I want to implement NiN (Network in Network) using Keras, but I could not find anything useful on the net. I want to implement the architecture in the image below. Can anybody help?
Just look at the functional API of Keras (https://keras.io/models/model/) and do something like this:
def build_model(input_layer, idx):
    # model code, e.g. logits = first_layer(parameters)(input_layer)
    # could also load an already trained model
    return logits

input_layer = Input(...)
output = input_layer
for i in range(num_models):
    output = build_model(output, i)

final_layer = Model(input_layer, output)
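Since the target is NiN specifically and the image isn't reproduced here, here is a rough sketch of classic NiN blocks with the functional API; the filter sizes follow the original NiN paper and the 10-class output is an assumption, so adjust both to your architecture:

from tensorflow.keras import layers, Model, Input

def nin_block(x, filters, kernel_size, strides):
    # one spatial convolution followed by two 1x1 convolutions (the "MLP" part of NiN)
    x = layers.Conv2D(filters, kernel_size, strides=strides, padding='same', activation='relu')(x)
    x = layers.Conv2D(filters, 1, activation='relu')(x)
    x = layers.Conv2D(filters, 1, activation='relu')(x)
    return x

inputs = Input(shape=(224, 224, 3))
x = nin_block(inputs, 96, 11, 4)
x = layers.MaxPooling2D(3, strides=2)(x)
x = nin_block(x, 256, 5, 1)
x = layers.MaxPooling2D(3, strides=2)(x)
x = nin_block(x, 384, 3, 1)
x = layers.MaxPooling2D(3, strides=2)(x)
x = nin_block(x, 10, 3, 1)              # 10 = number of classes
x = layers.GlobalAveragePooling2D()(x)  # NiN uses global average pooling instead of dense layers
outputs = layers.Activation('softmax')(x)
model = Model(inputs, outputs)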
I have a requirement where I want to run some processing on sentences encoded by a sentence_transformers model, then add some logic to get the final results, and I want to do this on a serverless inference endpoint.
The code should be something like this:
df = pd.read_csv(filename)
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
vec_result = model.encode(df['query'])
cos_scores = util.pytorch_cos_sim(vec_result, vec_result)
final_result = postprocess(df, cos_scores)
return final_result
Is this doable using an inference endpoint?
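For context, I imagine the endpoint would need a custom inference script roughly like the sketch below, using the SageMaker inference toolkit's model_fn/predict_fn conventions; postprocess and the payload shape are placeholders for my own logic:

# inference.py (sketch)
from sentence_transformers import SentenceTransformer, util

def model_fn(model_dir):
    # loaded once when the endpoint container starts
    return SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

def predict_fn(data, model):
    # data is the deserialized request payload, e.g. {"queries": [...]}
    queries = data['queries']
    embeddings = model.encode(queries, convert_to_tensor=True)
    cos_scores = util.pytorch_cos_sim(embeddings, embeddings)
    return postprocess(queries, cos_scores)  # my own post-processing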
Let me explain my problem:
I have to update the code of a notebook that used version 1.x of the SageMaker SDK to make batch predictions from an XGBoost endpoint generated in AWS SageMaker.
After defining a dataframe called ordered_data, I try to run this:
def batch_predict(data, xgb_predictor, rows=500):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = ''
    for array in split_array:
        new_predictions = xgb_predictor.predict(array).decode('utf-8')
        predictions = predictions + '\n' + new_predictions
    predictions = predictions.replace('\n', ',')
    predictions = predictions.replace(',,', ',')
    return np.fromstring(predictions[1:], sep=',')

def get_predictions(ordered_data, xgb_predictor):
    xgb_predictor.content_type = 'text/csv'
    xgb_predictor.serializer = csv_serializer
    xgb_predictor.deserializer = None
    #predictions = batch_predict(ordered_data.as_matrix(), xgb_predictor)  # get the scores for each piece of data
    predictions = batch_predict(ordered_data.values, xgb_predictor)
    predictions = pd.DataFrame(predictions, columns=['score'])
    return predictions

xgb_predictor = sagemaker.predictor.RealTimePredictor(endpoint_name='sagemaker-xgboost-2023-01-18')
predictions = get_predictions(ordered_data, xgb_predictor)
predictions2 = pd.concat([predictions, raw_data[['order_id']]], axis=1)
I've checked the SageMaker v2 documentation and tried to update many things, and I've also run !sagemaker-upgrade-v2 --in-file file.ipynb --out-file file2.ipynb,
but nothing works.
I get several errors like:
'content_type' property of object 'deprecated_class..DeprecatedClass' has no setter.
If I delete the line where I define content_type, I get: AttributeError: 'NoneType' object has no attribute 'ACCEPT'.
and so on.
I need to update all this code but I don't know how.
In Python SDK v2, the SageMaker predictor class takes serializer and deserializer parameters, so the serialization of input data and the deserialization of the result are configured through these constructor arguments.
Note: the csv_serializer, json_serializer, npy_serializer, csv_deserializer, json_deserializer, and numpy_deserializer objects have been deprecated in v2 and replaced by classes you instantiate, e.g.:
serializer=CSVSerializer(),
deserializer=JSONDeserializer()
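A minimal sketch of how the predictor from the question could be constructed in SDK v2 (endpoint name taken from the question; in v2 the content type comes from the serializer rather than being set on the predictor, and leaving the deserializer at its default should return raw bytes so the existing .decode('utf-8') logic can stay, though that is worth verifying against the v2 docs):

from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer

# v2 replacement for RealTimePredictor: the serializer determines the content type
xgb_predictor = Predictor(
    endpoint_name='sagemaker-xgboost-2023-01-18',
    serializer=CSVSerializer(),
)

With this, the xgb_predictor.content_type, xgb_predictor.serializer = csv_serializer and xgb_predictor.deserializer = None lines in get_predictions can simply be removed.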
I want to save a model in Python using the SavedModelBuilder API and then load it in C++.
I was able to successfully run the TensorFlow label_image example and load the downloaded model in C++, but when I try to do the same with my own custom model it does not work. I then followed Tom's solution to export my .pb file. I tried to load the model both as shown in the label_image example and using a MetaGraph, but neither worked.
My code to load Graph Def is
tensorflow::GraphDef graph_def;
Status load_graph_status = ReadBinaryProto(tensorflow::Env::Default(), graph_file_name, &graph_def);
session->reset(tensorflow::NewSession(tensorflow::SessionOptions()));
Status session_create_status = (*session)->Create(graph_def);
and to load Meta Graph Def is
tensorflow::MetaGraphDef graph_def;
Status load_graph_status = ReadBinaryProto(tensorflow::Env::Default(), graph_file_name, &graph_def);
session->reset(tensorflow::NewSession(tensorflow::SessionOptions()));
Status session_create_status = (*session)->Create(graph_def.graph_def());
I saved the model in Python as per Tom's article:
prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={"inputs": model_input},
        outputs={"outputs": model_output},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature
    },
)
builder.save()
When I run the code, I just get the error
Failed to load compute graph
when I load the graph as a plain GraphDef. If I load it as a MetaGraphDef instead, I get the following error:
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/wire_format_lite.cc:611] String field 'tensorflow.NodeDef.op' contains invalid UTF-8 data when parsing a protocol buffer. Use the 'bytes' type if you intend to send raw bytes.
Note that I am able to run the label_image example, so there are no other issues like an incorrect path to the model or general syntax errors.
I want help understanding and using:
SavedModelBundle bundle;
...
LoadSavedModel(session_options, run_options, export_dir, {kSavedModelTagTrain},
&bundle);
as instructed in the Save and Restore documentation provided by TensorFlow.
To conclude, I want a way to save a TensorFlow model in Python using SavedModelBuilder and load it in C++.
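From the documentation I understand the C++ loading side should look roughly like the sketch below; the export_dir/input_tensor arguments and the "inputs:0"/"outputs:0" tensor names are my assumptions based on my export code (the real names can be checked with saved_model_cli), and since I saved with tag_constants.SERVING I presumably need kSavedModelTagServe rather than kSavedModelTagTrain, but I would like confirmation:

#include <string>
#include <vector>
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include "tensorflow/core/framework/tensor.h"

tensorflow::Status RunSavedModel(const std::string& export_dir,
                                 const tensorflow::Tensor& input_tensor,
                                 std::vector<tensorflow::Tensor>* outputs) {
  tensorflow::SavedModelBundle bundle;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;

  // Load the SavedModel directory (graph + variables) with the "serve" tag.
  tensorflow::Status status = tensorflow::LoadSavedModel(
      session_options, run_options, export_dir,
      {tensorflow::kSavedModelTagServe}, &bundle);
  if (!status.ok()) {
    return status;
  }

  // Feed and fetch by tensor name through the bundle's session.
  return bundle.session->Run({{"inputs:0", input_tensor}},
                             {"outputs:0"}, /*target_node_names=*/{}, outputs);
}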
I have deployed the universal_sentence_encoder_large_3 model to AWS SageMaker. When I attempt to predict with the deployed model, I get Failed precondition: Table not initialized. as the error. I have included the part where I save my model below:
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
def tfhub_to_savedmodel(model_name, export_path):
    model_path = '{}/{}/00000001'.format(export_path, model_name)
    tfhub_uri = 'http://tfhub.dev/google/universal-sentence-encoder-large/3'

    with tf.Session() as sess:
        module = hub.Module(tfhub_uri)
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])

        input_params = module.get_input_info_dict()
        dtype = input_params['text'].dtype
        shape = input_params['text'].get_shape()

        # define the model inputs
        inputs = {'text': tf.placeholder(dtype, shape, 'text')}
        output = module(inputs['text'])
        outputs = {
            'vector': output,
        }

        # export the model
        tf.saved_model.simple_save(
            sess,
            model_path,
            inputs=inputs,
            outputs=outputs)

    return model_path
I have seen other people ask about this problem, but no solution has ever been posted. It seems to be a common problem with tensorflow_hub sentence encoders.
I was running into this exact issue earlier this week while trying to modify this example SageMaker notebook, particularly the part about serving the model, i.e. running predictor.predict() on the SageMaker TensorFlow Estimator.
The solution outlined in this issue worked perfectly for me: https://github.com/awslabs/amazon-sagemaker-examples/issues/773#issuecomment-509433290
I think it's just that tf.tables_initializer() only runs for training; it needs to be specified through legacy_init_op if you want it to run at prediction time.
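Concretely, a minimal sketch of that workaround under TF 1.x, reusing the names from your export code (sess, model_path, inputs, outputs): export with SavedModelBuilder instead of simple_save and pass legacy_init_op so the lookup tables get initialized when the model is loaded for serving.

import tensorflow as tf

signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'text': inputs['text']},
    outputs={'vector': outputs['vector']})

builder = tf.saved_model.builder.SavedModelBuilder(model_path)
builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
    },
    legacy_init_op=tf.tables_initializer())  # run table initialization at load/serve time
builder.save()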
I have a trained weight matrix. I would like to extract the features at each and every layer and store them in a file. How could I do that? Thanks.
Have a look at the Keras FAQ
One simple way is to create a new Model that will output the layers
that you are interested in:
from keras.models import Model
model = ... # create the original model
layer_name = 'my_layer'
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(data)
Alternatively, you can build a Keras function that will return the
output of a certain layer given a certain input, for example:
from keras import backend as K
get_3rd_layer_output = K.function([model.layers[0].input],
[model.layers[3].output])
layer_output = get_3rd_layer_output([X])[0]
Similarly, you could build a Theano or TensorFlow function directly.
Note that if your model has a different behavior in training and
testing phase (e.g. if it uses Dropout, BatchNormalization, etc.),
you will need to pass the learning phase flag to your function:
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()],
[model.layers[3].output])
# output in test mode = 0
layer_output = get_3rd_layer_output([X, 0])[0]
# output in train mode = 1
layer_output = get_3rd_layer_output([X, 1])[0]
Then you just need to store your predictions in a file, e.g. with np.save('filename.npy', intermediate_output) (np.save writes a single array to a .npy file; use np.savez if you want several arrays in one .npz file).
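If you want every layer in one go, here is a small sketch (not from the FAQ) that collects the output of each layer for one input batch X and stores them all in a single compressed .npz file keyed by layer name:

import numpy as np
from keras.models import Model

# one multi-output model whose outputs are all of the original model's layer outputs
extractor = Model(inputs=model.input,
                  outputs=[layer.output for layer in model.layers])
features = extractor.predict(X)
np.savez_compressed('layer_features.npz',
                    **{layer.name: feat for layer, feat in zip(model.layers, features)})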