Google AI platform custom prediction routines with multiple inputs, how to read json inputs - google-cloud-platform

For creating a custom prediction routine with a Keras (Tensorflow 2.1) model, I am having trouble figuring out what form the json inputs are coming in as, and how to read them in the predictor class for multiple inputs. All of the custom prediction routine examples in the documentation use simple flat single-input lists. If for example we send in our inputs as:
{"instances": [
{
"event_type_input": [1, 2, 20],
"event_dwelltime_input": [1.368, 0.017, 0.0],
"rf_input": [1.2, -2.8]},
{
"event_type_input": [14, 40, 20],
"event_dwelltime_input": [1.758, 13.392, 0.0],
"rf_input": [1.29, -2.87]}
]}
How should we ingest the incoming json in our predictor class?
class MyPredictor(object):
    def __init__(self, model):
        self.model = model

    def predict(self, instances, **kwargs):
        inputs = np.array(instances)
        # The above example from the docs is wrong for multiple inputs.
        # What should our inputs be to get the inputs in the right shape
        # for our keras model?
        outputs = self.model.predict(inputs)
        return outputs.tolist()
Our JSON inputs to Google AI Platform are a list of dictionaries. However, for a Keras model, our inputs need to be in a different shape, like the following:
inputs = {
    "event_type_input": np.array([[1, 2, 20], [14, 40, 20]]),
    "event_dwelltime_input": np.array([[1.368, 0.017, 0.0], [1.758, 13.392, 0.0]]),
    "rf_input": np.array([[1.2, -2.8], [1.29, -2.87]])
}
model.predict(inputs)
Am I right that the thing to do, then, is just reshape the instances? My only confusion is that when using the TensorFlow framework runtime (instead of a custom prediction routine), prediction on this JSON input works fine, and I thought all the TensorFlow framework does is call the .predict method on the instances (unless there is indeed some under-the-hood reshaping of the data; I couldn't find a source explaining what exactly happens).
Main question: How should we write our predictor class to take in the instances such that we can run the model.predict method on it?

I would suggest creating a new Keras Model and exporting it.
Create a separate Input layer for each of the three inputs to the new Model (with the name of each Input being the corresponding key in your JSON struct). Then, in this Model, reshape the inputs, borrow the weights/structure from your trained model, and export the new model. Something like this:
trained_model = keras.models.load_model(...)  # trained model

input1 = keras.Input(..., name='event_type_input')
input2 = keras.Input(..., name='event_dwelltime_input')
input3 = keras.Input(..., name='rf_input')

export_inputs = keras.layers.concatenate([input1, input2, input3])
# Reshape to what the first hidden layer of your trained model expects
reshaped_inputs = keras.layers.Lambda(...)(export_inputs)
layer1 = trained_model.get_layer(index=1)(reshaped_inputs)
layer2 = trained_model.get_layer(index=2)(layer1)  # etc. ...
...
exportModel = keras.Model([input1, input2, input3], export_output)
exportModel.save(...)
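If you would rather keep the custom prediction routine and do the reshaping in the predictor itself (the approach the question suggests), a minimal, untested sketch could pivot the list of per-instance dicts into the dict of batched arrays that Keras expects. This assumes the JSON keys match the model's Input layer names:
import numpy as np

class MyPredictor(object):
    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        # Pivot the list of per-instance dicts into a dict of batched arrays,
        # which is the multi-input format Keras model.predict expects.
        input_names = ["event_type_input", "event_dwelltime_input", "rf_input"]
        inputs = {name: np.array([instance[name] for instance in instances])
                  for name in input_names}
        outputs = self._model.predict(inputs)
        return outputs.tolist()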

Related

Why did I get 2 different results from two models with same parameters and inputs?

I loaded resnet18 into my two models (model1 and model2), with pretrained weights.
I want to use them as feature extractors.
For model1: I froze all the parameters except the last linear layer model1.fc, then trained it. After training, I set model1.fc to torch.nn.Identity().
For model2: I directly set model2.fc to torch.nn.Identity().
These two models should then be the same, but I get different forward results from the same inputs.
If model1 is not trained, they produce the same result, so maybe something is wrong with the parameter freezing.
However, I checked their parameters after training model1 and setting the last layer of both models to the identity layer, and they seem to be the same.
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms, models

# Load weights pretrained on ImageNet
def load_weights(model):
    model_dir = "....."
    model.load_state_dict(torch.utils.model_zoo.load_url(
        "https://download.pytorch.org/models/resnet18-5c106cde.pth", model_dir=model_dir))
    return model

model1 = models.resnet18()
model1 = load_weights(model1)
for param in model1.parameters():
    param.requires_grad = False
model1.fc = nn.Linear(512, 2)
model1.cuda()
optimizer = optim.SGD(model1.fc.parameters(), lr=1e-2, momentum=0.9)
result_freeze = \
    run_training(model1, optimizer, device, train_loader, val_loader, num_epochs=10)

model2 = models.resnet18()
model2 = load_weights(model2)
model2.fc = nn.Identity()
model2.cuda()

model1.fc = nn.Identity()
model1.cuda()

# checking forward results (extracting features)
# The batch size is one here
for batch_idx, (data, target) in enumerate(X_train):
    data, target = data.to(device), target.to(device)
    d = data
    X_train_feature[batch_idx] = model1(data).cpu().detach().numpy()
    y_train[batch_idx] = target.cpu().detach().numpy()
    X_train2_feature[batch_idx] = model2(d).cpu().detach().numpy()
    y_train2[batch_idx] = target.cpu().detach().numpy()
    print(sum(X_train_feature[batch_idx] == X_train2_feature[batch_idx]))
    print(sum(y_train[batch_idx] == y_train2[batch_idx]))
    print(torch.sum(d == data))

for batch_idx, (data, target) in enumerate(X_test):
    data, target = data.to(device), target.to(device)
    d = data
    X_test_feature[batch_idx] = model1(data).cpu().detach().numpy()
    y_test[batch_idx] = target.cpu().detach().numpy()
    X_test2_feature[batch_idx] = model2(d).cpu().detach().numpy()
    y_test2[batch_idx] = target.cpu().detach().numpy()
    print(sum(X_test_feature[batch_idx] == X_test2_feature[batch_idx]))
    print(sum(y_test[batch_idx] == y_test2[batch_idx]))
    print(torch.sum(d == data))

# checking parameters
for a, b in zip(model1.parameters(), model2.parameters()):
    print(torch.sum(a != b))
I expect to get the same forward results from model1 and model2, but they are different. And if they produce different forward results, why do they have exactly the same parameters?
Have you taken into account the changes that might occur to the BatchNorm layers?
Batch norm layers do not behave like normal layers: their internal statistics are updated by computing a running mean and std of the data during training, not by gradient descent.
Try setting model1.eval() before the finetuning and then check.
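One way to confirm the BatchNorm hypothesis: model.parameters() does not include the running statistics (running_mean / running_var are registered as buffers, not parameters), so the parameter comparison in the question can pass even though the running stats diverged during fine-tuning. A quick check, assuming both models are built as in the question:
# Compare the non-parameter state of both models, e.g. BatchNorm running stats.
for (name1, buf1), (name2, buf2) in zip(model1.named_buffers(), model2.named_buffers()):
    if not torch.equal(buf1, buf2):
        print(name1, "differs between model1 and model2")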

ML Engine Online Prediction - Unexpected tensor name: values

I get the following error when trying to make an online prediction on my ML Engine model.
The key "values" is not correct. (See error on image.)
I already tested with raw image data: {"image_bytes": {"b64": base64.b64encode(jpeg_data)}}, and with the data converted to a numpy array.
Currently I have the following code:
from googleapiclient import discovery
import base64
import os
from PIL import Image
import json
import numpy as np

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/jacob/Desktop/******"

def predict_json(project, model, instances, version=None):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the Cloud ML Engine Model is deployed.
        model (str): model name.
        instances ([Mapping[str: Any]]): Keys should be the names of Tensors
            your deployed model expects as inputs. Values should be datatypes
            convertible to Tensors, or (potentially nested) lists of datatypes
            convertible to tensors.
        version: str, version of the model to target.
    Returns:
        Mapping[str: any]: dictionary of prediction results defined by the
            model.
    """
    # Create the ML Engine service object.
    # To authenticate set the environment variable
    # GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
    service = discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)
    if version is not None:
        name += '/versions/{}'.format(version)
    response = service.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()
    if 'error' in response:
        raise RuntimeError(response['error'])
    return response['predictions']

savepath = 'upload/11277229_F.jpg'
img = Image.open('test/01011000/11277229_F.jpg')
test = img.resize((299, 299))
test.save(savepath)
img1 = open(savepath, "rb").read()

def load_image(filename):
    with open(filename) as f:
        return np.array(f.read())

predict_json('image-recognition-25***08', 'm500_200_waug',
             [{"values": str(base64.b64encode(img1).decode("utf-8")), "key": '87'}], 'v1')
The error message itself indicates (as you point out in the question), that the key "values" is not one of the inputs specified in the model. To inspect the model's input, use saved_model_cli show --all --dir=/path/to/model. That will show you a list of the names of the inputs. You'll need to use the correct name.
That said, it appears there is another issue. It's not clear from the question what type of input your model is expecting, though it's likely one of two things:
(1) a matrix of integers or floats, or
(2) a byte string with the raw image file contents.
The exact solution will depend on which of the above your exported model is using; saved_model_cli will help here, based on the type and shape of the input. It will either be DT_FLOAT (or some other int/float type) with shape [None, 299, 299, CHANNELS], or DT_STRING with shape [None], respectively.
If your model is type (1), then you will need to send a matrix of ints/floats (which does not use base64 encoding):
predict_json('image-recognition-25***08', 'm500_200_waug', [{CORRECT_INPUT_NAME: load_image(savepath).tolist(), "key": '87'}], 'v1')
Note the use of tolist to convert the numpy array to a list of lists.
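Note also that the load_image in the question opens the file in text mode and would not produce an image matrix; for the type (1) case it would need to actually decode the image. A sketch using PIL (which the question already imports):
import numpy as np
from PIL import Image

def load_image(filename):
    # Decode the JPEG into a (height, width, channels) float array,
    # matching a DT_FLOAT [None, 299, 299, CHANNELS]-style input.
    return np.asarray(Image.open(filename), dtype=np.float32)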
In the case of type (2), you need to tell the service you have some base64 data by adding in {"b64": ...}:
predict_json('image-recognition-25***08', 'm500_200_waug', [{CORRECT_INPUT_NAME: {"b64": str(base64.b64encode(img1).decode("utf-8"))}, "key": '87'}], 'v1')
All of this, of course, depends on using the correct name for CORRECT_INPUT_NAME.
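If you prefer to look up that name programmatically rather than with the CLI, here is a sketch using the TF 1.x SavedModel loader (the path and tag set are illustrative):
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel and print the input names of its serving signature.
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], '/path/to/model')
    print(list(meta_graph.signature_def['serving_default'].inputs.keys()))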
One final note: I'm assuming your model actually does have key as an additional input, since you included it in your request; again, that can all be verified against the output of saved_model_cli show.
I used to get this error too. If anyone comes across this error while using gcloud: the input tensor is automatically named csv_row. For example, this works for me now:
"instances": [{
"csv_row": "STRING,7,4.02611534,9,14,0.66700000,0.17600000,0.00000000,0.00000000,1299.76500000,57",
"key": "0"
}]

How to restrict model predicted value within range?

I want to do linear regression with AWS SageMaker. I have trained my model with some values and it predicts values from the inputs, but sometimes it predicts a value out of range: I am predicting a percentage, which can't go below 0 or above 100. How can I restrict it here:
sess = sagemaker.Session()

linear = sagemaker.estimator.Estimator(containers[boto3.Session().region_name],
                                       role,
                                       train_instance_count=1,
                                       train_instance_type='ml.c4.xlarge',
                                       output_path='s3://{}/{}/output'.format(bucket, prefix),
                                       sagemaker_session=sess)

linear.set_hyperparameters(feature_dim=5,
                           mini_batch_size=100,
                           predictor_type='regressor',
                           epochs=10,
                           num_models=32,
                           loss='absolute_loss')

linear.fit({'train': s3_train_data, 'validation': s3_validation_data})
How can I make my model not predict values outside the range [0, 100]?
Yes, you can. You can implement output_fn to "brick wall" your output. SageMaker calls output_fn after the model returns its value, so you can do any post-processing of the result there.
This can be done by creating a separate Python file and defining the output_fn method there.
Provide this Python file as the entry point when instantiating your Estimator.
Something like:
sess = sagemaker.Session()

linear = sagemaker.estimator.Estimator(containers[boto3.Session().region_name],
                                       role,
                                       train_instance_count=1,
                                       train_instance_type='ml.c4.xlarge',
                                       output_path='s3://{}/{}/output'.format(bucket, prefix),
                                       sagemaker_session=sess,
                                       entry_point='entry.py')

linear.set_hyperparameters(feature_dim=5,
                           mini_batch_size=100,
                           predictor_type='regressor',
                           epochs=10,
                           num_models=32,
                           loss='absolute_loss')

linear.fit({'train': s3_train_data, 'validation': s3_validation_data})
Your entry.py could look something like
def output_fn(data, accepts):
"""
Args:
data: A result from TensorFlow Serving
accepts: The Amazon SageMaker InvokeEndpoint Accept value. The content type the response object should be
serialized to.
Returns:
object: The serialized object that will be send to back to the client.
"""
Implement the logic to "brick wall" here.
return data.outputs['outputs'].string_val
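For illustration, a minimal sketch of the "brick wall" itself, assuming the result can be converted to a numeric array (the exact wire format depends on the serving container):
import numpy as np

def output_fn(data, accepts):
    # Clamp every predicted value into the valid percentage range [0, 100].
    clipped = np.clip(np.asarray(data, dtype=float), 0.0, 100.0)
    # Return something JSON-serializable to send back to the client.
    return clipped.tolist()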

How to save features into a file in keras?

I have a trained weight matrix. I would like to extract the features at each and every layer and store them in a file. How could I do that? Thanks.
Have a look at the Keras FAQ.
One simple way is to create a new Model that will output the layers that you are interested in:
from keras.models import Model

model = ...  # create the original model
layer_name = 'my_layer'
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(data)
Alternatively, you can build a Keras function that will return the output of a certain layer given a certain input, for example:
from keras import backend as K

get_3rd_layer_output = K.function([model.layers[0].input],
                                  [model.layers[3].output])
layer_output = get_3rd_layer_output([X])[0]
Similarly, you could build a Theano or TensorFlow function directly.
Note that if your model behaves differently in the training and testing phases (e.g. if it uses Dropout, BatchNormalization, etc.), you will need to pass the learning phase flag to your function:
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                                  [model.layers[3].output])

# output in test mode = 0
layer_output = get_3rd_layer_output([X, 0])[0]

# output in train mode = 1
layer_output = get_3rd_layer_output([X, 1])[0]
Then you just need to store your outputs in a file, e.g. with np.save('filename.npy', intermediate_output).
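Since the question asks for the features at every layer, one option (a sketch, assuming a single-input Keras model and an in-memory data array) is to build one Model that returns all layer outputs and save them together:
import numpy as np
from keras.models import Model

# One output per (non-input) layer; predict returns a list of arrays in the same order.
feature_layers = [layer for layer in model.layers
                  if layer.__class__.__name__ != 'InputLayer']
all_layers_model = Model(inputs=model.input,
                         outputs=[layer.output for layer in feature_layers])
features = all_layers_model.predict(data)

# Store one array per layer, keyed by layer name, in a single .npz archive.
np.savez('features.npz',
         **{layer.name: feat for layer, feat in zip(feature_layers, features)})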

How do I convert a CloudML Alpha model to a SavedModel?

In the alpha release of CloudML's online prediction service, the format for exporting a model was:
inputs = {"x": x, "y_bytes": y}
g.add_to_collection("inputs", json.dumps(inputs))
outputs = {"a": a, "b_bytes": b}
g.add_to_collection("outputs", json.dumps(outputs))
I would like to convert this to a SavedModel without retraining my model. How can I do that?
We can convert this to a SavedModel by importing the old model, creating the Signatures, and re-exporting it. This code is untested, but something like this should work:
import json
import tensorflow as tf
from tensorflow.contrib.session_bundle import session_bundle

# Import the "old" model
session, _ = session_bundle.load_session_bundle_from_path(export_dir)

# Define the inputs and the outputs for the SavedModel.
# The collections hold JSON strings mapping names to tensor names.
old_inputs = json.loads(tf.get_collection('inputs')[0])
inputs = {name: tf.saved_model.utils.build_tensor_info(
              session.graph.get_tensor_by_name(tensor_name))
          for name, tensor_name in old_inputs.items()}

old_outputs = json.loads(tf.get_collection('outputs')[0])
outputs = {name: tf.saved_model.utils.build_tensor_info(
               session.graph.get_tensor_by_name(tensor_name))
           for name, tensor_name in old_outputs.items()}

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
)

# Save out the converted model
b = tf.saved_model.builder.SavedModelBuilder(new_export_dir)
b.add_meta_graph_and_variables(session,
                               [tf.saved_model.tag_constants.SERVING],
                               signature_def_map={'serving_default': signature})
b.save()