"Request payload size exceeds the limit" in google cloud json prediction request - python-2.7

I am trying to serve a prediction using Google Cloud ML Engine. I generated my model using fast-style-transfer and saved it in the Models section of my Google Cloud ML Engine project. The model takes float32 input, so I had to convert my image to that format:
# "im" is the decoded input image tensor (e.g. the output of tf.image.decode_jpeg)
image = tf.image.convert_image_dtype(im, dtypes.float32)  # rescale pixel values to float32 in [0, 1]
matrix_test = image.eval()  # evaluate to a NumPy array (requires an active tf.Session)
Then I generated the JSON payload for the request:
js = json.dumps({"image": matrix_test.tolist()})
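For reference, that string is then written to request.json, one JSON instance per line, which is what --json-instances expects. Roughly (the exact file handling is not shown above):

# Hypothetical write step: each line of request.json must hold exactly one JSON instance.
with open("request.json", "w") as f:
    f.write(js + "\n")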
Then I send the request with the following command:
gcloud ml-engine predict --model {model-name} --json-instances request.json
The following error is returned:
ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: {
  "error": {
    "code": 400,
    "message": "Request payload size exceeds the limit: 1572864 bytes.",
    "status": "INVALID_ARGUMENT"
  }
}
I would like to know whether I can increase this limit and, if not, whether there is a workaround. Thanks in advance!

This is a hard limit of the Cloud Machine Learning Engine API. There is a feature request to increase it; you could post a comment there asking for an update. In the meantime, you could try the following workaround.
Hope it helps

If you use batch prediction, you can make predictions on images that exceed that limit.
Here is the official documentation on that: https://cloud.google.com/ml-engine/docs/tensorflow/batch-predict
I hope this helps you in some way!
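For example, a batch prediction job can be submitted with something along these lines (the job name, bucket paths and region below are placeholders; check the linked documentation for the exact flag values):

gcloud ml-engine jobs submit prediction my_batch_prediction_job \
  --model {model-name} \
  --input-paths gs://my-bucket/inputs/* \
  --output-path gs://my-bucket/outputs/ \
  --region us-central1 \
  --data-format TEXT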

Related

Google AI Platform: Error Code 400: Response size too large

I am running predictions in Google Earth Engine using a deep learning model hosted on Google AI Platform. The problem is that when I apply the model to the images, I get the following error in Google Earth Engine:
Error: AI Platform prediction service responded with error code 400:
'Response size too large. Received at least 3574386 bytes; max is 2000000.'. (Error code: 3)
The model has only three layers (an MLP), and the image is not big either. How can I address this issue?

AWS SageMaker EndPoint returns 415

I have trained a multiclass classification model on the wine quality dataset and I have deployed the model.
After deploying the model, I got an endpoint URL like:
https://runtime.sagemaker.region.amazonaws.com/endpoints/experiment/invocations
I am invoking the URL with AWS credentials and a body like:
{
  "instances": [7.4, 0.7, 0, 1.9, 0.076, 11, 34, 0.9978, 3.51, 0.56, 9.4]
}
But I am getting the following error:
{
  "ErrorCode": "CLIENT_ERROR_FROM_MODEL",
  "LogStreamArn": "",
  "OriginalMessage": "'application/json' is an unsupported content type.",
  "OriginalStatusCode": 415
}
I tried looking for trace logs in CloudWatch, but there are no traces there either. Could anyone guide me on this?
I trained the model using SageMaker Studio.
The message "'application/json' is an unsupported content type." seems to show your issue. Most likely your inference container does not support JSON content type, so you will need to use a content type that the container supports.

Google Cloud AutoML predict service returned 'Internal error encountered'

I trained a model with the Google Cloud Vision AutoML service, and whenever I try to predict an image from the console it returns 'Internal error encountered'. This also happens from the API, which returns this JSON:
{
  "error": {
    "code": 500,
    "message": "Internal error encountered.",
    "status": "INTERNAL"
  }
}
The model trained for 24 hours.
It should return the predicted classes for the image, as trained by the model.
It turned out to be a bug in AutoML. The problem is that if you have a dataset you trained a model on, say for 1 hour, and then resume training, AutoML creates a new model rather than modifying the old one. If you delete the old model, though, the new one won't work and will show the above error.

Error Code 413 encountered in Admin SDK Directory API (Users: patch)

I am a G Suite Super Admin. I have set up Google Single Sign On (SSO) for our AWS accounts inside our G Suite. As we have several AWS accounts, we need to run the "Users: patch" (https://developers.google.com/admin-sdk/directory/v1/reference/users/patch#try-it) to include other AWS accounts for Google Single Sign On.
While provisioning additional AWS accounts for Google Single Sign-On, we encountered error "Code: 413" after running the above-mentioned patch. Details below:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "uploadTooLarge",
        "message": "Profile quota is exceeded.: Data is too large for "
      }
    ],
    "code": 413,
    "message": "Profile quota is exceeded.: Data is too large for "
  }
}
What could be the possible cause of this error? Is there any workaround for it? Otherwise, are there other ways to provision multiple AWS accounts using Google Single Sign-On?
Thank you in advance for your patience and assistance in this.
Notice what the error is saying: you are probably going well beyond the designated limit. Try the solution from this SO post, which is to:
choose a different SAML IdP. The new limit was said to be somewhere between 2087 - 2315 characters

Prediction failed: unknown error

I'm using Google Cloud Machine Learning to predict labels for images.
I've trained my model, named flower, and I can see the API endpoint in the Google API Explorer, but when I call the API there I get the following error:
[error screenshot]
I can't understand why.
Thanks
Ibere
I guess you followed the tutorial from https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/flowers?
I had the exact same problem, but with some trial and error I succeeded with the payload:
{"instances":[{"image_bytes": {"b64": "/9j/4AAQ...rest of the base64..."}, "key": "0"}]}