I'm using Google Cloud Machine Learning to predict labels for images.
I've trained my model, named flower, and I can see the API endpoint in the Google API Explorer, but when I call the API there, I get the following error:
[screenshot of the error]
I can't understand why.
Thanks
Ibere
I guess you followed the tutorial from https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/flowers?
I had the exact same problem, but with some trial and error I succeeded with this payload:
{"instances":[{"image_bytes": {"b64": "/9j/4AAQ...rest of the base64..."}, "key": "0"}]}
My goal is to get a new column in Power BI with key phrases based on a column with text data. I am trying to connect the Azure Text Analytics API to Power BI, following this tutorial:
https://learn.microsoft.com/nl-nl/azure/cognitive-services/text-analytics/tutorials/tutorial-power-bi-key-phrases
After I invoke the custom function and set the authentication and privacy levels to "anonymous" and "public", the KeyPhrases column I get contains only the value "Error", with the following description:
An error occurred in the ‘’ query. DataSource.Error: Web.Contents failed to get contents from 'https://******.cognitiveservices.azure.com/.cognitiveservices.azure.com/text/analytics/v2.1/keyPhrases' (404): Resource Not Found
Details:
DataSourceKind=Web
DataSourcePath=https://*******.cognitiveservices.azure.com/.cognitiveservices.azure.com/text/analytics/v2.1/keyPhrases
Url=https://******.cognitiveservices.azure.com/.cognitiveservices.azure.com/text/analytics/v2.1/keyPhrases
Also, I am not sure if it is related to my issue, but I see the following warning in the Networking menu of my Azure account:
"VNet setting is not supported for current API type or resource location."
I checked all the steps in the tutorial and re-entered the authentication and privacy settings. I also tried the same with the sentiment analysis function. Finally, I tried everything on a different, very simple dataset.
I am not sure what the cause of my issue is or how to solve it.
Any suggestions would be much appreciated.
Best, Rosanne
Look at your error message:
'https://******.cognitiveservices.azure.com/.cognitiveservices.azure.com/text/analytics/v2.1/keyPhrases' (404): Resource Not Found Details: DataSourceKind=Web DataSourcePath=https://*******.cognitiveservices.azure.com/.cognitiveservices.azure.com/text/analytics/v2.1/keyPhrases Url=https://******.cognitiveservices.azure.com/.cognitiveservices.azure.com/text/analytics/v2.1/keyPhrases
It throws a 404, so you are pointing to the wrong URL.
As you can see at the beginning of your URL:
https://******.cognitiveservices.azure.com/.cognitiveservices.azure.com << here ".cognitiveservices.azure.com" appears twice, so your URL setup is wrong.
I don't know exactly how it is set up on your side, but you likely provided a region or endpoint during setup, and that is where the wrong value went in.
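For reference, the request should hit the resource host exactly once. Here is a minimal sketch outside Power BI (Python; the resource name "myresource" and the key environment variable are assumptions) showing the expected URL shape:

import os
import requests

# The resource host appears once; the API path is appended to it.
endpoint = "https://myresource.cognitiveservices.azure.com"
url = endpoint + "/text/analytics/v2.1/keyPhrases"

payload = {"documents": [{"id": "1", "language": "en", "text": "Power BI makes reports easy."}]}
headers = {"Ocp-Apim-Subscription-Key": os.environ["TEXT_ANALYTICS_KEY"]}

resp = requests.post(url, json=payload, headers=headers)
print(resp.status_code, resp.json())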
We have suddenly started experiencing an error when using the DialogFlow "restore agent" API. The call is failing with the error:
400 com.google.apps.framework.request.BadRequestException: Invalid agent zip. Missing required json file agent.json
Oddly, it only seems to happen for newly created DialogFlow agents, but not for older/existing ones. We are using this API so that we can programmatically create a custom agent using our own intents/entities. This code has been working for about the past two years, with no changes on our side. We are using the official DialogFlow client library for Python. We have been on version 0.2.0, and I tried updating to the latest (0.8.0) but there was no change.
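Roughly, our restore call looks like this (a simplified sketch; the project ID and file name are placeholders):

import dialogflow_v2 as dialogflow

client = dialogflow.AgentsClient()
parent = "projects/PROJECT_ID"

# agent.zip is the agent export we build programmatically.
with open("agent.zip", "rb") as f:
    client.restore_agent(parent, agent_content=f.read())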
I tried changing our code to include the agent.json file (by using the "export agent" API and getting the agent.json file from there). In that case, I no longer get the above error and the restore appears to succeed. However, the agent then seems to be corrupt in some way. When trying to click on any intent -- or various other operations in the DialogFlow console -- I get the error:
Failed to get Training Phrases Errorid=xxx
(where xxx seems to be a UUID that changes each time)
Trying to export the agent in that state also displays an error:
Error downloading agent
Occasionally, even when including agent.json as above, the restore will still fail, returning the error:
500 Internal error encountered.
I appreciate any ideas on how we can get this working again. Thanks!
After a lot of trial and error I found the solution. Here it is in case anyone else runs into this. Something must have changed recently in how DialogFlow processes the zip upload during the "restore agent" operation --
1) The agent.json file is now required in the zip file, where before it was optional
2) We found some of the "id" elements in our _usersays files for various intents were not valid UUIDs. Previously this did not cause any error, but now the agent winds up in an invalid state ("Failed to get Training Phrases" error, etc as mentioned above).
An easy way to fix this is to export one of the existing agents and copy its agent.json and package.json into your current directory before uploading; a quick pre-upload check is sketched below.
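In case it is useful, here is a rough pre-upload check (assuming the zip is named agent.zip) that verifies agent.json is present and that every "id" in the *_usersays_* files parses as a valid UUID:

import json
import uuid
import zipfile

with zipfile.ZipFile("agent.zip") as zf:
    names = zf.namelist()
    assert "agent.json" in names, "agent.json missing from zip"
    for name in names:
        if "_usersays_" in name and name.endswith(".json"):
            # Each usersays file is a list of phrases, each with an "id".
            for phrase in json.loads(zf.read(name)):
                uuid.UUID(phrase["id"])  # raises ValueError on an invalid UUID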
agent.json is now required by DialogFlow.
I have trained a multiclass classification model on the wine quality dataset and deployed it.
After deploying the model, I got an endpoint URL like:
https://runtime.sagemaker.region.amazonaws.com/endpoints/experiment/invocations
I am invoking the URL, passing AWS credentials and a body like:
{
"instances": [7.4,0.7,0,1.9,0.076,11,34,0.9978,3.51,0.56,9.4]
}
But I am getting the error below:
{
"ErrorCode": "CLIENT_ERROR_FROM_MODEL",
"LogStreamArn": "",
"OriginalMessage": "'application/json' is an unsupported content type.",
"OriginalStatusCode": 415
}
I tried looking for trace logs in CloudWatch, but there are no traces there either. Could anyone guide me on this?
I trained the model using SageMaker Studio.
The message "'application/json' is an unsupported content type." points to your issue. Most likely your inference container does not support the JSON content type, so you will need to send a content type that the container does support; for many SageMaker built-in algorithms that is text/csv.
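Here is a hedged sketch of invoking the endpoint with CSV instead (the endpoint name is a placeholder, and text/csv is an assumption about what your container accepts):

import boto3

runtime = boto3.client("sagemaker-runtime")

# One row of features as a CSV string, in the same order used in training.
body = "7.4,0.7,0,1.9,0.076,11,34,0.9978,3.51,0.56,9.4"

response = runtime.invoke_endpoint(
    EndpointName="experiment",   # placeholder endpoint name
    ContentType="text/csv",      # a content type the container supports
    Body=body,
)
print(response["Body"].read().decode())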
I am trying to serve a prediction using Google Cloud ML Engine. I generated my model using fast-style-transfer and saved it in the models section of Google Cloud ML Engine. The model takes float32 input, so I had to convert my image to that format.
image = tf.image.convert_image_dtype(im, tf.float32)  # im is the decoded image tensor
matrix_test = image.eval()  # requires an active tf.Session
Then I generated the JSON file for the request:
js = json.dumps({"image": matrix_test.tolist()})
open("request.json", "w").write(js)  # one JSON instance per line
Then I requested the prediction with the following command:
gcloud ml-engine predict --model {model-name} --json-instances request.json
The following error is returned:
ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: {
"error": {
"code": 400,
"message": "Request payload size exceeds the limit: 1572864 bytes.",
"status": "INVALID_ARGUMENT"
}
}
I would like to know if I can increase this limit and, if not, whether there is a workaround... Thanks in advance!
This is a hard limit of the Cloud Machine Learning Engine API. There is a feature request to increase this limit; you could post a comment there asking for an update. In the meantime, you could try a workaround such as the batch prediction approach described in the answer below.
Hope it helps
If you use batch prediction, you can make predictions on images that exceed that limit.
Here is the official documentation on that: https://cloud.google.com/ml-engine/docs/tensorflow/batch-predict
I hope this helps you in some way!
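As a rough sketch, batch prediction reads newline-delimited JSON from Cloud Storage, one instance per line (the bucket, job, and model names below are placeholders, and matrix_test is the array from the question):

import json

# One JSON instance per line; matrix_test comes from the question above.
with open("instances.json", "w") as f:
    f.write(json.dumps({"image": matrix_test.tolist()}) + "\n")

# Then upload the file and submit the batch job, for example:
#   gsutil cp instances.json gs://my-bucket/inputs/
#   gcloud ml-engine jobs submit prediction my_job \
#       --model {model-name} \
#       --input-paths gs://my-bucket/inputs/instances.json \
#       --output-path gs://my-bucket/outputs/ \
#       --region us-central1 \
#       --data-format TEXT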
I loaded a dataset into Google AutoML using the UI. I got the message that I have enough labeled text and can start training. However, when I click Start Training, I get the error:
Exception while handling your request: Request contains an invalid argument.
When reporting refer to this issue by its tracking code tc_698293
As I am using the UI, I don't know what the arguments of the request are. Any help is greatly appreciated. Thanks.
It is required to have at least 2 examples in each of the TRAIN, TEST, and VALIDATION sets to start training.
The error message could be clearer about that, and the UI could check for that condition and warn users early. In the short term, a better error message will be provided.
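If you prepared the dataset from a CSV, a quick sanity check along these lines can confirm the split sizes (assuming an AutoML-style CSV whose first column is the split; the file name and layout are assumptions):

import csv
from collections import Counter

# Count rows per split in the first CSV column.
counts = Counter()
with open("dataset.csv", newline="") as f:
    for row in csv.reader(f):
        counts[row[0].strip().upper()] += 1

for split in ("TRAIN", "VALIDATION", "TEST"):
    status = "OK" if counts[split] >= 2 else "needs at least 2 examples"
    print(split, counts[split], status)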