AWS SageMaker Endpoint returns 415

I have trained a multiclass classification model on the wine quality dataset and deployed it.
After deploying the model I got an endpoint URL like:
https://runtime.sagemaker.region.amazonaws.com/endpoints/experiment/invocations
I am invoking the URL with AWS credentials and a request body like:
{
"instances": [7.4,0.7,0,1.9,0.076,11,34,0.9978,3.51,0.56,9.4]
}
But I am getting the error below:
{
    "ErrorCode": "CLIENT_ERROR_FROM_MODEL",
    "LogStreamArn": "",
    "OriginalMessage": "'application/json' is an unsupported content type.",
    "OriginalStatusCode": 415
}
I tried looking for trace logs in CloudWatch but there are no traces there either. Could anyone guide me on this?
I trained the model using SageMaker Studio.

The message "'application/json' is an unsupported content type." points to your issue: most likely your inference container does not support the JSON content type, so you will need to invoke the endpoint with a content type that the container does support (many built-in SageMaker algorithms accept text/csv, for example).
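As a rough sketch (assuming a container that accepts CSV input; the endpoint name here is a placeholder), invoking the endpoint with text/csv from Python would look something like this:

import boto3

# Hypothetical example: send the same features as a single CSV row instead of JSON.
# Adjust ContentType and Body to whatever your inference container actually accepts.
runtime = boto3.client("sagemaker-runtime")

payload = "7.4,0.7,0,1.9,0.076,11,34,0.9978,3.51,0.56,9.4"

response = runtime.invoke_endpoint(
    EndpointName="experiment",   # placeholder endpoint name
    ContentType="text/csv",      # a content type the container supports
    Body=payload,
)

print(response["Body"].read().decode("utf-8"))

Which content types a deployed model accepts depends on the inference container, so it is worth checking the documentation for the algorithm or framework you trained with.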

Related

VertexAI Batch Inference Failing for Custom Container Model

I'm having trouble executing VertexAI's batch inference, despite endpoint deployment and inference working perfectly. My TensorFlow model has been trained in a custom Docker container with the following arguments:
aiplatform.CustomContainerTrainingJob(
    display_name=display_name,
    command=["python3", "train.py"],
    container_uri=container_uri,
    model_serving_container_image_uri=container_uri,
    model_serving_container_environment_variables=env_vars,
    model_serving_container_predict_route='/predict',
    model_serving_container_health_route='/health',
    model_serving_container_command=[
        "gunicorn",
        "src.inference:app",
        "--bind",
        "0.0.0.0:5000",
        "-k",
        "uvicorn.workers.UvicornWorker",
        "-t",
        "6000",
    ],
    model_serving_container_ports=[5000],
)
I have a Flask endpoint for predict and health, essentially defined as below:
@app.get("/health")
def health_check_batch():
    return 200

@app.post("/predict")
def predict_batch(request_body: dict):
    pred_df = pd.DataFrame(request_body['instances'],
                           columns=request_body['parameters']['columns'])
    # do some model inference things
    return {"predictions": predictions.tolist()}
As described, after training a model and deploying it to an endpoint, I can successfully hit the API with a JSON payload like:
{"instances":[[1,2], [1,3]], "parameters":{"columns":["first", "second"]}}
This also works when using the endpoint Python SDK and feeding in instances/parameters as functional arguments.
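For reference, the online prediction call that works looks roughly like this (a minimal sketch; the endpoint resource name below is a placeholder):

from google.cloud import aiplatform

# Minimal sketch of the online prediction call that already works.
# The endpoint resource name is a placeholder.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

prediction = endpoint.predict(
    instances=[[1, 2], [1, 3]],
    parameters={"columns": ["first", "second"]},
)
print(prediction.predictions)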
However, I've tried performing batch inference with a CSV file and a JSONL file, and every time it fails with an Error Code 3. I can't find logs explaining why it failed in Logs Explorer either. I've read through all the documentation I could find and have seen others successfully invoke batch inference, but I haven't been able to find a guide. Does anyone have recommendations on the batch file structure or the structure of my APIs? Thank you!

I'm getting an error creating an AWS AppSync Authenticated DataSource

I'm working through the Build On Serverless|S2 E4 video and I've gotten to the point of creating an authenticated HTTP data source using the AWS CLI. I'm getting this error:
Parameter validation failed:
Unknown parameter in httpConfig: "authorizationConfig", must be one of: endpoint
I think I'm using the same information provided in the video, repository and gist, updated for my own AWS account. It seems like it's some kind of formatting or missing-information error, but I'm just not seeing the problem.
When I remove the "authorizationConfig" property from state-machine-datasource.json, the command works.
I've reviewed the code against the information in the video as well as the documentation and examples here and here provided by AWS.
This is the command I'm running.
aws appsync create-data-source --api-id {my app sync app id} --name ProcessBookingStateMachine \
    --type HTTP --http-config file://src/backend/booking/state-machine-datasource.json \
    --service-role-arn arn:aws:iam::{my account}:role/AppSyncProcessBookingState --profile default
This is my state-machine-datasource.json:
{
    "endpoint": "https://states.us-east-2.amazonaws.com",
    "authorizationConfig": {
        "authorizationType": "AWS_IAM",
        "awsIamConfig": {
            "signingRegion": "us-east-2",
            "signingServiceName": "states"
        }
    }
}
Thanks,
I needed to update my AWS CLI to the latest version. The authenticated HTTP data source is something fairly new, I guess.

"Request payload size exceeds the limit" in google cloud json prediction request

I am trying to serve a prediction using Google Cloud ML Engine. I generated my model using fast-style-transfer and saved it in my Google Cloud ML Engine models section. For input it uses float32, so I had to convert my image to this format:
image = tf.image.convert_image_dtype(im, dtypes.float32)
matrix_test = image.eval()
Then I generated my json file for the request:
js = json.dumps({"image": matrix_test.tolist()})
I send the request using the following command:
gcloud ml-engine predict --model {model-name} --json-instances request.json
The following error is returned:
ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: {
"error": {
"code": 400,
"message": "Request payload size exceeds the limit: 1572864 bytes.",
"status": "INVALID_ARGUMENT"
}
}
I would like to know if I can increase this limit and, if not, whether there is a workaround... thanks in advance!
This is a hard limit for the Cloud Machine Learning Engine API. There's a feature request to increase this limit. You could post a comment there asking for an update. Moreover, you could try the following solution in the meantime.
Hope it helps.
If you use batch prediction, you can make predictions on images that exceed that limit.
Here is the official documentation on that: https://cloud.google.com/ml-engine/docs/tensorflow/batch-predict
I hope this helps you in some way!
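As a rough sketch of what the batch prediction input could look like (the bucket and file names are placeholders, and this assumes the same {"image": ...} instance format used above): batch prediction reads newline-delimited JSON from Cloud Storage, one instance per line.

import json
from google.cloud import storage

# Write the instance(s) as newline-delimited JSON: one JSON object per line.
with open("instances.json", "w") as f:
    f.write(json.dumps({"image": matrix_test.tolist()}) + "\n")

# Upload the file to a Cloud Storage bucket (placeholder names) so the
# batch prediction job can read it as an input path.
client = storage.Client()
bucket = client.bucket("my-bucket")
bucket.blob("inputs/instances.json").upload_from_filename("instances.json")

The batch prediction job is then pointed at the uploaded file and writes its results to an output path in Cloud Storage, as described in the documentation linked above.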

Amazon SageMaker Unsupported content-type application/x-image

I have a TensorFlow/Keras based CNN model deployed in SageMaker.
To invoke inference, I followed this tutorial.
Below is the code snippet:
import boto3

def inferImage(endpoint_name):
    # Load the image bytes
    img = open('./shoe.jpg', 'rb').read()

    runtime = boto3.Session().client(service_name='sagemaker-runtime')

    # Call your model for predicting which object appears in this image.
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType='application/x-image',
        Body=bytearray(img))

    response_body = response['Body']
    print(response_body.read())
When I run this code, I get the error:
Unsupported content-type application/x-image
What am I missing? Any suggestions on how to fix it?
Did you use the SageMaker Python SDK?
If yes, you could refer to this README https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_python.rst
and provide your own input_fn() to handle application/x-image data.
If you don't provide a customized input_fn() in your user script, the default input_fn can only handle 3 types: "application/json", "text/csv" and "application/octet-stream".
The exception is thrown here: https://github.com/aws/sagemaker-tensorflow-container/blob/1e74bc6440cdd7e083d15026869e021c5ab504a4/src/tf_container/serve.py#L239
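A rough sketch of such an input_fn, assuming the conventions from the README linked above (the 224x224 size and the scaling are placeholders for whatever preprocessing your model actually expects):

import io
import numpy as np
from PIL import Image

def input_fn(serialized_input, content_type):
    # Hypothetical handler for raw image bytes; adjust the resizing and
    # normalization to your model's expected input.
    if content_type == 'application/x-image':
        image = Image.open(io.BytesIO(serialized_input)).convert('RGB')
        image = image.resize((224, 224))                  # placeholder size
        array = np.asarray(image).astype('float32') / 255.0
        return array[np.newaxis, ...]                     # add a batch dimension
    raise ValueError('Unsupported content type: {}'.format(content_type))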

Prediction failed: unknown error

I'm using Google Cloud Machine Learning to predict labels for images.
I've trained my model, named flower, and I can see the API endpoint in the Google API Explorer, but when I call the API from API Explorer I get the following error:
[error screenshot]
I can't understand why.
Thanks
Ibere
I guess you followed the tutorial from https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/flowers?
I had the exact same problem, but with some trial and error I succeeded with the payload:
{"instances":[{"image_bytes": {"b64": "/9j/4AAQ...rest of the base64..."}, "key": "0"}]}