ML Engine Online Prediction - Unexpected tensor name: values - google-cloud-platform

I get the following error when trying to make an online prediction on my ML Engine model.
The key "values" is not correct. (See error on image.)
I have already tried sending the raw image data as {"image_bytes": {"b64": base64.b64encode(jpeg_data)}}, and I have also tried converting the data to a numpy array.
Currently I have the following code:
from googleapiclient import discovery
import base64
import os
from PIL import Image
import json
import numpy as np
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/jacob/Desktop/******"
def predict_json(project, model, instances, version=None):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the Cloud ML Engine Model is deployed.
        model (str): model name.
        instances ([Mapping[str: Any]]): Keys should be the names of Tensors
            your deployed model expects as inputs. Values should be datatypes
            convertible to Tensors, or (potentially nested) lists of datatypes
            convertible to tensors.
        version: str, version of the model to target.
    Returns:
        Mapping[str: any]: dictionary of prediction results defined by the
            model.
    """
    # Create the ML Engine service object.
    # To authenticate set the environment variable
    # GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
    service = discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)

    if version is not None:
        name += '/versions/{}'.format(version)

    response = service.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()

    if 'error' in response:
        raise RuntimeError(response['error'])

    return response['predictions']
savepath = 'upload/11277229_F.jpg'
img = Image.open('test/01011000/11277229_F.jpg')
test = img.resize((299, 299))
test.save(savepath)
img1 = open(savepath, "rb").read()
def load_image(filename):
    with open(filename, "rb") as f:
        return np.array(f.read())
predict_json('image-recognition-25***08', 'm500_200_waug', [{"values": str(base64.b64encode(img1).decode("utf-8")), "key": '87'}], 'v1')

The error message itself indicates (as you point out in the question) that the key "values" is not one of the inputs specified in the model. To inspect the model's inputs, use saved_model_cli show --all --dir=/path/to/model. That will show you the list of input names; you'll need to use the correct one.
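For reference, the relevant part of that command's output looks roughly like the sketch below; the input name, dtype, and shape here are illustrative, not taken from your model:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image_bytes'] tensor_info:
        dtype: DT_STRING
        shape: (-1)

Whatever appears inside inputs[...] is the name to use in the request body.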
That said, it appears there is another issue. It's not clear from the question what type of input your model is expecting, though it's likely one of two things:
1. A matrix of integers or floats
2. A byte string with the raw image file contents
The exact solution will depend on which of the two your exported model is using, and saved_model_cli will tell you, based on the type and shape of the input: either DT_FLOAT (or some other int/float type) with shape [None, 299, 299, CHANNELS] for case (1), or DT_STRING with shape [None] for case (2).
If your model is type (1), then you will need to send a matrix of ints/floats (which does not use base64 encoding):
predict_json('image-recognition-25***08', 'm500_200_waug', [{CORRECT_INPUT_NAME: load_image(savepath).tolist(), "key": '87'}], 'v1')
Note the use of tolist to convert the numpy array to a list of lists.
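If your model does take the pixel-matrix form, note that load_image as written above returns the raw file contents rather than decoded pixels. A minimal sketch of building an actual pixel matrix with PIL and numpy instead (the function name is illustrative, and the scaling to [0, 1] is an assumption that depends on how the model was trained):

from PIL import Image
import numpy as np

def load_image_matrix(filename):
    # Decode the JPEG into a (299, 299, 3) array of pixel values.
    img = Image.open(filename).resize((299, 299))
    pixels = np.asarray(img, dtype=np.float32)
    # Assumption: the model expects inputs scaled to [0, 1]; adjust to match training.
    return (pixels / 255.0).tolist()

This would then be passed as the value of CORRECT_INPUT_NAME in place of load_image(savepath).tolist().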
In the case of type (2), you need to tell the service you have some base64 data by adding in {"b64": ...}:
predict_json('image-recognition-25***08', 'm500_200_waug', [{CORRECT_INPUT_NAME: {"b64": str(base64.b64encode(img1).decode("utf-8"))}, "key": '87'}], 'v1')
All of this, of course, depends on using the correct name for CORRECT_INPUT_NAME.
One final note: I'm assuming your model actually does have key as an additional input, since you included it in your request; again, that can be verified against the output of saved_model_cli show.

I used to get this error too. If anyone comes across this error while using gcloud: the input tensor is automatically called csv_row. For example, this works for me now:
"instances": [{
"csv_row": "STRING,7,4.02611534,9,14,0.66700000,0.17600000,0.00000000,0.00000000,1299.76500000,57",
"key": "0"
}]

Related

Why did I get 2 different results from two models with the same parameters and inputs?

I loaded resnet18 into my two models (model1 and model2), with pretrained weights.
I want to use them as feature extractors.
For model1: I froze all parameters except the last linear layer model1.fc, then trained it. After training, I set model1.fc to torch.nn.Identity().
For model2: I directly set model2.fc to torch.nn.Identity().
These two models should then be the same, but I get different forward results for the same inputs.
If model1 has not been trained, they give the same result, so maybe something is wrong with the freezing of the parameters.
However, I checked their parameters after training model1 and after setting the last layer of both models to the identity layer, and they seem to be the same.
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms, models

# Load weights pretrained on ImageNet
def load_weights(model):
    model_dir = "....."
    model.load_state_dict(torch.utils.model_zoo.load_url(
        "https://download.pytorch.org/models/resnet18-5c106cde.pth", model_dir=model_dir))
    return model

model1 = models.resnet18()
model1 = load_weights(model1)
for param in model1.parameters():
    param.requires_grad = False
model1.fc = nn.Linear(512, 2)
model1.cuda()

optimizer = optim.SGD(model1.fc.parameters(), lr=1e-2, momentum=0.9)
result_freeze = \
    run_training(model1, optimizer, device, train_loader, val_loader, num_epochs=10)

model2 = models.resnet18()
model2 = load_weights(model2)
model2.fc = nn.Identity()
model2.cuda()

model1.fc = nn.Identity()
model1.cuda()

# checking forward results (extracting features)
# The batch size is one here
for batch_idx, (data, target) in enumerate(X_train):
    data, target = data.to(device), target.to(device)
    d = data
    X_train_feature[batch_idx] = model1(data).cpu().detach().numpy()
    y_train[batch_idx] = target.cpu().detach().numpy()
    X_train2_feature[batch_idx] = model2(d).cpu().detach().numpy()
    y_train2[batch_idx] = target.cpu().detach().numpy()
    print(sum(X_train_feature[batch_idx] == X_train2_feature[batch_idx]))
    print(sum(y_train[batch_idx] == y_train2[batch_idx]))
    print(torch.sum(d == data))

for batch_idx, (data, target) in enumerate(X_test):
    data, target = data.to(device), target.to(device)
    d = data
    X_test_feature[batch_idx] = model1(data).cpu().detach().numpy()
    y_test[batch_idx] = target.cpu().detach().numpy()
    X_test2_feature[batch_idx] = model2(d).cpu().detach().numpy()
    y_test2[batch_idx] = target.cpu().detach().numpy()
    print(sum(X_test_feature[batch_idx] == X_test2_feature[batch_idx]))
    print(sum(y_test[batch_idx] == y_test2[batch_idx]))
    print(torch.sum(d == data))

# checking parameters
for a, b in zip(model1.parameters(), model2.parameters()):
    print(torch.sum(a != b))
I expected to get the same forward results from model1 and model2, but they are different. And if they produce different forward results, why do they have exactly the same parameters?
Have you taken into account changes that might occur to BatchNorm layers?
Batch norm layers do not behave like normal layers - their internal parameters are modified by computing running mean and std of the data, and not by gradient descent.
Try setting model1.eval() before the fine-tuning and then check.
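To see this for yourself, here is a small sketch (assuming the model1/model2 setup above): compare the models' buffers as well as their parameters, since the BatchNorm running statistics live in buffers and are not returned by parameters():

# parameters() does not include BatchNorm running_mean / running_var; those are buffers.
for (name1, buf1), (name2, buf2) in zip(model1.named_buffers(), model2.named_buffers()):
    if not torch.equal(buf1, buf2):
        print("buffer differs:", name1)

After training model1 in train() mode, you should see its BatchNorm buffers differ from model2's, which explains the different forward results despite identical parameters.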

Failed precondition: Table not initialized. on deployed universal sentence encoder from aws sagemaker

I have deployed the universal_sentence_encoder_large_3 model to AWS SageMaker. When I attempt to predict with the deployed model, I get Failed precondition: Table not initialized. as an error. I have included the part where I save my model below:
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
def tfhub_to_savedmodel(model_name, export_path):
    model_path = '{}/{}/00000001'.format(export_path, model_name)
    tfhub_uri = 'http://tfhub.dev/google/universal-sentence-encoder-large/3'

    with tf.Session() as sess:
        module = hub.Module(tfhub_uri)
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])

        input_params = module.get_input_info_dict()
        dtype = input_params['text'].dtype
        shape = input_params['text'].get_shape()

        # define the model inputs
        inputs = {'text': tf.placeholder(dtype, shape, 'text')}
        output = module(inputs['text'])
        outputs = {
            'vector': output,
        }

        # export the model
        tf.saved_model.simple_save(
            sess,
            model_path,
            inputs=inputs,
            outputs=outputs)

    return model_path
I have seen other people ask about this problem, but no solution has ever been posted. It seems to be a common problem with tensorflow_hub sentence encoders.
I was running into this exact issue earlier this week while trying to modify this example SageMaker notebook, particularly the part that serves the model, i.e. running predictor.predict() on the SageMaker TensorFlow Estimator.
The solution outlined in this issue worked perfectly for me: https://github.com/awslabs/amazon-sagemaker-examples/issues/773#issuecomment-509433290
I think it's just because tf.tables_initializer() only runs for training, but it needs to be specified through the legacy_init_op if you want to run it during prediction.
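In terms of the export code above, that amounts to passing the table initializer to simple_save. A sketch, assuming TF 1.x where tf.saved_model.simple_save accepts a legacy_init_op argument:

# export the model, making sure lookup tables get initialized when the
# SavedModel is loaded for serving
tf.saved_model.simple_save(
    sess,
    model_path,
    inputs=inputs,
    outputs=outputs,
    legacy_init_op=tf.tables_initializer())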

How can I debug predictions on ML Engine, predictions returns empty array

I am implementing a TFX pipeline, similar to the Chicago taxi example.
The prediction of the pushed model returns {"predictions": []}.
How do I debug this issue?
I can see logs of the predictions being made, but because it returns an empty array the status code is 200 and there is no useful information on what went wrong. I suspect the prediction request data isn't passed correctly to the estimator.
The Chicago example uses the following serving receiver and that works, so I assume it should also work for my case:
def _example_serving_receiver_fn(transform_output, schema):
    """Build the serving inputs.

    Args:
        transform_output: directory in which the tf-transform model was written
            during the preprocessing step.
        schema: the schema of the input data.

    Returns:
        Tensorflow graph which parses examples, applying tf-transform to them.
    """
    raw_feature_spec = _get_raw_feature_spec(schema)
    raw_feature_spec.pop(_LABEL_KEY)

    raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
        raw_feature_spec, default_batch_size=None)
    serving_input_receiver = raw_input_fn()

    transformed_features = transform_output.transform_raw_features(
        serving_input_receiver.features)

    return tf.estimator.export.ServingInputReceiver(
        transformed_features, serving_input_receiver.receiver_tensors)
The main difference is that I only expect one input: a string of programming languages separated by '|', e.g. 'java|python'.
I then split that string in my preprocessing function and turn it into a multi-hot encoded array of length 500 (I have exactly 500 options).
It could also be the case that the prediction input isn't being correctly transformed by tf.transform (tf.transform is part of the TFX pipeline and runs correctly).
request: {"instances": ["javascript|python"]}
response: {"predictions": []}
expected response: {"predictions": [520]} (it's a regression model)
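One way to narrow this down without going through the service: load the pushed SavedModel locally and feed it the same input, so you can see whether the serving receiver and the tf.transform step produce the features and predictions you expect. A sketch for TF 1.x; the export path and the raw feature name 'languages' are placeholders for whatever your schema actually uses, and the input key ('examples', the default for build_parsing_serving_input_receiver_fn) can be double-checked with saved_model_cli show:

import tensorflow as tf
from tensorflow.contrib import predictor  # TF 1.x

export_dir = '/path/to/pushed_model'  # placeholder: the directory TFX pushed
predict_fn = predictor.from_saved_model(export_dir)

# The parsing serving receiver expects serialized tf.Example protos,
# so wrap the raw string input in one ('languages' is a placeholder name).
example = tf.train.Example(features=tf.train.Features(feature={
    'languages': tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[b'javascript|python'])),
}))
print(predict_fn({'examples': [example.SerializeToString()]}))

If this also returns an empty result, the problem is in the exported graph rather than in how the request is sent.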

How to restrict model predicted value within range?

I want to do linear regression with AWS SageMaker, where I have trained my model with some values and it predicts values for given inputs. But sometimes it predicts values out of range: I am predicting a percentage, which can't go below 0 or above 100. How can I restrict it here:
sess = sagemaker.Session()

linear = sagemaker.estimator.Estimator(containers[boto3.Session().region_name],
                                       role,
                                       train_instance_count=1,
                                       train_instance_type='ml.c4.xlarge',
                                       output_path='s3://{}/{}/output'.format(bucket, prefix),
                                       sagemaker_session=sess)

linear.set_hyperparameters(feature_dim=5,
                           mini_batch_size=100,
                           predictor_type='regressor',
                           epochs=10,
                           num_models=32,
                           loss='absolute_loss')

linear.fit({'train': s3_train_data, 'validation': s3_validation_data})
How can I make my model not predict values outside the range [0, 100]?
Yes, you can. You can implement output_fn to "brick wall" your output. SageMaker calls output_fn after the model returns its value, to do any post-processing of the result.
This can be done by creating a separate Python file and defining the output_fn method there.
Provide this Python file when instantiating your Estimator.
Something like:
sess = sagemaker.Session()

linear = sagemaker.estimator.Estimator(containers[boto3.Session().region_name],
                                       role,
                                       train_instance_count=1,
                                       train_instance_type='ml.c4.xlarge',
                                       output_path='s3://{}/{}/output'.format(bucket, prefix),
                                       sagemaker_session=sess)

linear.set_hyperparameters(feature_dim=5,
                           mini_batch_size=100,
                           predictor_type='regressor',
                           epochs=10,
                           num_models=32,
                           loss='absolute_loss',
                           entry_point='entry.py')

linear.fit({'train': s3_train_data, 'validation': s3_validation_data})
Your entry.py could look something like:

def output_fn(data, accepts):
    """
    Args:
        data: A result from TensorFlow Serving
        accepts: The Amazon SageMaker InvokeEndpoint Accept value. The content type the
            response object should be serialized to.

    Returns:
        object: The serialized object that will be sent back to the client.
    """
    # Implement the logic to "brick wall" the output here.
    return data.outputs['outputs'].string_val
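The "brick wall" itself is just a clip to [0, 100]. A minimal sketch of that piece, kept separate from the serving plumbing since parsing data into numbers depends on the container's response format:

import numpy as np

def clip_predictions(raw_predictions):
    """Restrict regression outputs to the valid percentage range [0, 100]."""
    return np.clip(np.asarray(raw_predictions, dtype=float), 0.0, 100.0)

# Example: clip_predictions([-3.2, 47.9, 104.5]) -> array([  0. ,  47.9, 100. ])

Inside output_fn you would parse the model's result into numbers, run it through something like clip_predictions, and serialize the clipped values back to the client.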

Tweepy location on Twitter API filter always throws 406 error

I'm using the following code (from a Django management command) to listen to the Twitter stream. I've used the same code in a separate command to track keywords successfully, and I've branched this out to use location; I (apparently rightly) wanted to test this out without disrupting my existing analysis that's running.
I've followed the docs and have made sure the box is in long/lat format (in fact, I'm using the example long/lat from the Twitter docs now). It looks broadly the same as the question here, and I tried using their version of the code from the answer: same error. If I switch back to using 'track=...', the same code works, so it's a problem with the location filter.
Adding a print debug inside tweepy's streaming.py so I can see what's happening, I print out self.parameters, self.url and self.headers from _run, and get:
{'track': 't,w,i,t,t,e,r', 'delimited': 'length', 'locations': '-121.7500,36.8000,-122.7500,37.8000'}
/1.1/statuses/filter.json?delimited=length and
{'Content-type': 'application/x-www-form-urlencoded'}
respectively. That seems to me to be missing the location search in some way, shape or form. I don't believe I'm the only one using tweepy's location search, so I think it's more likely a problem in my use of it than a bug in tweepy (I'm on 2.3.0), but my implementation looks right as far as I can tell.
My stream handling code is here:
consumer_key = 'stuff'
consumer_secret = 'stuff'
access_token = 'stuff'
access_token_secret_var = 'stuff'

import tweepy
import json

# This is the listener, responsible for receiving data
class StdOutListener(tweepy.StreamListener):
    def on_data(self, data):
        # Twitter returns data in JSON format - we need to decode it first
        decoded = json.loads(data)
        #print type(decoded), decoded
        # Also, we convert UTF-8 to ASCII ignoring all bad characters sent by users
        try:
            user, created = read_user(decoded)
            print "DEBUG USER", user, created
            if decoded['lang'] == 'en':
                tweet, created = read_tweet(decoded, user)
                print "DEBUG TWEET", tweet, created
            else:
                pass
        except KeyError, e:
            print "Error on Key", e
            pass
        except DataError, e:
            print "DataError", e
            pass
        #print user, created
        print ''
        return True

    def on_error(self, status):
        print status

l = StdOutListener()
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret_var)
stream = tweepy.Stream(auth, l)

# locations must be long, lat
stream.filter(locations=[-121.75, 36.8, -122.75, 37.8], track='twitter')
The issue here was the order of the coordinates.
The correct format is:
south-west corner (long, lat), then north-east corner (long, lat). I had them transposed. :(
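In terms of the code above, the corrected call would look like the following sketch, using the San Francisco box from the Twitter docs and dropping track= since (as the answer below notes) the streaming API won't combine the location and keyword filters:

stream.filter(locations=[-122.75, 36.8, -121.75, 37.8])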
The streaming API doesn't allow filtering by location AND keyword simultaneously.
You should refer to this answer; I had the same problem earlier:
https://stackoverflow.com/a/22889470/4432830