gcloud ml-engine local predict fails
First I identified the required input.json structure with saved_model_cli show --all --dir saved_model/
Response:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_UINT8
shape: (-1, -1, -1, 3)
name: image_tensor:0
....
From this I formatted my input.json for gcloud ml-engine local predict as:
{"inputs": {"b64": "ENCODED"}}
...
Finally, I ran gcloud ml-engine local predict --model-dir saved_model/ --json-instances=PATH-TO-INPUTS.json
Response:
ERROR: (gcloud.ml-engine.local.predict) /usr/local/lib/python2.7/dist-packages/requests/__init__.py:83:
2018-10-03 09:20:06.598090: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
ERROR:root:Exception during running the graph: invalid literal for long() with base 10: '\xff\xd8\xff\xdb'
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine/local_predict.py", line 172, in <module>
main()
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine/local_predict.py", line 167, in main
signature_name=args.signature_name)
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/prediction_lib.py", line 106, in local_predict
predictions = model.predict(instances, signature_name=signature_name)
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/prediction_utils.py", line 233, in predict
preprocessed, stats=stats, **kwargs)
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/frameworks/tf_prediction_lib.py", line 350, in predict
"Exception during running the graph: " + str(e))
cloud.ml.prediction.prediction_utils.PredictionError: Failed to run the provided model: Exception during running the graph: invalid literal for long() with base 10: '\xff\xd8\xff\xdb' (Error code: 2)
Any help overcoming this obstacle would be great; I have not been able to find a solution online so far. Thank you!
The data in input.json does not match the shape of inputs['inputs']. You have not provided enough information about what the various dimensions represent, but I suspect this is NHWC (batch size x height x width x channels) encoding of an image.
I also suspect these are supposed to be raw pixel values, in which case you should not be base64-encoding the values; i.e., you should send data like this:
{"inputs": [[[0, 0, 0], ... [0, 0, 0]]]}
That being said, you should consider sending the image as a byte string and decoding the image in the graph. More information about various approaches can be found here:
https://stackoverflow.com/a/46222990/1399222
It seems you are using "Raw Tensor Encoded as JSON", and I recommend "Compressed Image Data", or the slightly simpler "Tensors Packed as Byte Strings".
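For what it's worth, here is a minimal sketch of the raw-tensor option above (hedged: it assumes a local JPEG and that PIL and numpy are installed; 'image.jpg' and 'inputs.json' are placeholder names). Each line of the instances file must be one JSON object whose nested lists match the (height, width, 3) shape:

import json
import numpy as np
from PIL import Image

# Write one raw-tensor instance per line for --json-instances.
pixels = np.asarray(Image.open('image.jpg').convert('RGB'), dtype=np.uint8)
with open('inputs.json', 'w') as f:
    f.write(json.dumps({'inputs': pixels.tolist()}) + '\n')

Note that this payload grows very quickly with image size, which is one reason to prefer the compressed-image approaches from the link.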
So I installed pyomo, glpk, and ipopt with anaconda.
When I run the example code here: https://pyomo.readthedocs.io/en/stable/contributed_packages/mindtpy.html
from pyomo.environ import *
model = ConcreteModel()
model.x = Var(bounds=(1.0,10.0),initialize=5.0)
model.y = Var(within=Binary)
model.c1 = Constraint(expr=(model.x-3.0)**2 <= 50.0*(1-model.y))
model.c2 = Constraint(expr=model.x*log(model.x)+5.0 <= 50.0*(model.y))
model.objective = Objective(expr=model.x, sense=minimize)
SolverFactory('mindtpy').solve(model, mip_solver='glpk', nlp_solver='ipopt',tee=True)
model.objective.display()
model.display()
model.pprint()
I get output showing that the binary variable has apparently become infeasible:
python minlpex.py
INFO: ---Starting MindtPy---
INFO: Original model has 2 constraints (2 nonlinear) and 0 disjunctions, with
2 variables, of which 1 are binary, 0 are integer, and 1 are continuous.
INFO: NLP 1: Solve relaxed integrality
INFO: NLP 1: OBJ: 1.0 LB: 1.0 UB: inf
INFO: ---MindtPy Master Iteration 0---
INFO: MIP 1: Solve master problem.
WARNING: Empty constraint block written in LP format - solver may error
Traceback (most recent call last):
File "minlpex.py", line 13, in <module>
op.SolverFactory('mindtpy').solve(model, mip_solver='glpk', nlp_solver='ipopt',tee=True)
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/MindtPy.py", line 370, in solve
MindtPy_iteration_loop(solve_data, config)
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/iterate.py", line 30, in MindtPy_iteration_loop
handle_master_mip_optimal(master_mip, solve_data, config)
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/mip_solve.py", line 62, in handle_master_mip_optimal
config)
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/gdpopt/util.py", line 199, in copy_var_list_values
v_to.set_value(value(v_from, exception=False))
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/core/base/var.py", line 173, in set_value
if valid or self._valid_value(val):
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/core/base/var.py", line 185, in _valid_value
"domain %s" % (val, type(val), self.domain))
ValueError: Numeric value `0.22709088987977885` (<class 'float'>) is not in domain Binary
So I was a little confused: since this is the provided example code, I would not expect it to error like this. I feel like I'm messing something up, or maybe I'm missing some required library?
Thanks a lot.
Looks like something must be wrong with the conda pyomo install or ipopt install.
When I reinstalled ipopt using pip and compiled pyomo from the GitHub source, everything worked fine.
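For anyone hitting the same thing, a quick sanity check of the solver installs might help before reinstalling (a hedged sketch; available(exception_flag=False) returns a boolean instead of raising):

from pyomo.environ import SolverFactory

# Check that pyomo can actually locate the glpk and ipopt executables.
for name in ('glpk', 'ipopt'):
    print(name, SolverFactory(name).available(exception_flag=False))

If either prints False, the solver binary is missing or not on the PATH, which would point to a broken install like the one described above.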
I am trying to retrain the TensorFlow Object Detection API with my own data.
I have labelled my images with labelImg, but when I run the script create_pascal_tf_record.py, which is included in tensorflow/models/research, I get some errors and I don't really know why:
python object_detection/dataset_tools/create_pascal_tf_record.py --data_dir=/home/jim/Documents/tfAPI/workspace/training_cabbage/images/train/ --label_map_path=/home/jim/Documents/tfAPI/workspace/training_cabbage/annotations/label_map.pbtxt --output_path=/home/jim/Desktop/cabbage_pascal.record --set=train --annotations_dir=/home/jim/Documents/tfAPI/workspace/training_cabbage/images/train/ --year=merged
Traceback (most recent call last):
File "object_detection/dataset_tools/create_pascal_tf_record.py", line 185, in <module>
tf.app.run()
File "/home/jim/.virtualenvs/enrouteDeepDroneTF/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "object_detection/dataset_tools/create_pascal_tf_record.py", line 167, in main
examples_list = dataset_util.read_examples_list(examples_path)
File "/home/jim/Documents/tfAPI/models/research/object_detection/utils/dataset_util.py", line 59, in read_examples_list
lines = fid.readlines()
File "/home/jim/.virtualenvs/enrouteDeepDroneTF/local/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 188, in readlines
self._preread_check()
File "/home/jim/.virtualenvs/enrouteDeepDroneTF/local/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 85, in _preread_check
compat.as_bytes(self.__name), 1024 * 512, status)
File "/home/jim/.virtualenvs/enrouteDeepDroneTF/local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: /home/jim/Documents/tfAPI/workspace/training_cabbage/images/train/VOC2007/ImageSets/Main/aeroplane_train.txt; No such file or directory
The train folder contains the XML files and the JPGs, the annotations folder contains my label_map.pbtxt for my custom class, and I want to write the TFRecord file to the desktop.
It seems that it can't find a file in my images and annotations folders, but I don't know why.
If someone has an idea, thank you in advance.
This error happens because you are using the code for PASCAL VOC, which requires a certain data folder structure. Basically, you would need to download and unpack VOCdevkit to make the script work. As user phd pointed out, the script expects the file VOC2007/ImageSets/Main/aeroplane_train.txt.
I recommend writing your own script for TFRecord creation; it's not difficult. You need just two key components:
A loop over your data that reads the images and annotations.
A function that encodes the data into tf.train.Example. For that you can pretty much re-use dict_to_tf_example.
Inside the loop, having created the tf_example, pass it to a TFRecordWriter:
writer.write(tf_example.SerializeToString())
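A minimal sketch of a complete script along these lines (hedged: the feature keys follow the dict_to_tf_example convention from the Object Detection API; the paths, class name, and label id are placeholders, and in practice you would parse the boxes from your labelImg XML files):

import io
import os
import tensorflow as tf
from PIL import Image

def make_tf_example(image_path, boxes, class_names, class_ids):
    # boxes: list of (xmin, ymin, xmax, ymax), normalized to [0, 1]
    with tf.gfile.GFile(image_path, 'rb') as fid:
        encoded_jpg = fid.read()
    width, height = Image.open(io.BytesIO(encoded_jpg)).size
    filename = os.path.basename(image_path).encode('utf8')
    feature = {
        'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'image/filename': tf.train.Feature(bytes_list=tf.train.BytesList(value=[filename])),
        'image/source_id': tf.train.Feature(bytes_list=tf.train.BytesList(value=[filename])),
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpg])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'jpeg'])),
        'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=[b[0] for b in boxes])),
        'image/object/bbox/ymin': tf.train.Feature(float_list=tf.train.FloatList(value=[b[1] for b in boxes])),
        'image/object/bbox/xmax': tf.train.Feature(float_list=tf.train.FloatList(value=[b[2] for b in boxes])),
        'image/object/bbox/ymax': tf.train.Feature(float_list=tf.train.FloatList(value=[b[3] for b in boxes])),
        'image/object/class/text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[n.encode('utf8') for n in class_names])),
        'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=class_ids)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

writer = tf.python_io.TFRecordWriter('train.record')
# One (image, annotations) tuple per iteration; build this list from your XML.
for image_path, boxes, names, ids in [('train/img_001.jpg', [(0.1, 0.2, 0.5, 0.6)], ['cabbage'], [1])]:
    tf_example = make_tf_example(image_path, boxes, names, ids)
    writer.write(tf_example.SerializeToString())
writer.close()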
OK, for future reference, this is how I add background images to the dataset, allowing the model to train on them.
Functions used from: datitran/raccoon_dataset
Generate CSV file -> xml_to_csv.py
Generate TFRecord from CSV file -> generate_tfrecord.py
First Step - Creating an XML file for the background image
Example of a background image XML file:
<annotation>
<folder>test/</folder>
<filename>XXXXXX.png</filename>
<path>your_path/test/XXXXXX.png</path>
<source>
<database>Unknown</database>
</source>
<size>
<width>640</width>
<height>640</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
</annotation>
Basically, you remove the entire <object> element (i.e., no annotations).
Second Step - Generate CSV file
Using xml_to_csv.py, I just make a little change to handle XML files that do not have any annotations (the background images), like so:
From the original:
https://github.com/datitran/raccoon_dataset/blob/93938849301895fb73909842ba04af9b602f677a/xml_to_csv.py#L12-L22
I add:
value = None
for member in root.findall('object'):
    value = (root.find('filename').text,
             int(root.find('size')[0].text),
             int(root.find('size')[1].text),
             member[0].text,
             int(member[4][0].text),
             int(member[4][1].text),
             int(member[4][2].text),
             int(member[4][3].text)
             )
    xml_list.append(value)
if value is None:
    value = (root.find('filename').text,
             int(root.find('size')[0].text),
             int(root.find('size')[1].text),
             '-1',
             '-1',
             '-1',
             '-1',
             '-1'
             )
    xml_list.append(value)
I'm just adding negative values for the bounding-box coordinates when there is no <object> element in the XML file, which is the case for the background images; this will be useful when generating the TFRecords.
Third and Final Step - Generating the TFRecords
Now, when creating the TFRecords, if the corresponding row/image has negative coordinates, I just add zero values to the record (before, this would not even have been possible).
So from the original:
https://github.com/datitran/raccoon_dataset/blob/93938849301895fb73909842ba04af9b602f677a/generate_tfrecord.py#L60-L66
I add:
for index, row in group.object.iterrows():
    if int(row['xmin']) > -1:
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))
    else:
        xmins.append(0)
        xmaxs.append(0)
        ymins.append(0)
        ymaxs.append(0)
        classes_text.append('something'.encode('utf8'))  # this does not matter for the background
        classes.append(5000)
Note that for classes_text (in the else branch), since the background images have no bounding boxes, you can replace the string with whatever you like; for the background cases it will not appear anywhere.
And lastly, for classes (in the else branch), you just need to add a numeric label that does not belong to any of your own classes.
For those who are wondering, I've used this procedure many times, and it currently works for my use cases.
Hope it helped in some way.
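If it helps, here is a quick sanity check I use on the output (a hedged sketch against the TF 1.x API; 'train.record' is a placeholder path). With the procedure above, background images should show exactly one all-zero box carrying the dummy class label:

import tensorflow as tf

# Iterate over the serialized examples and print the boxes and labels.
for i, record in enumerate(tf.python_io.tf_record_iterator('train.record')):
    example = tf.train.Example()
    example.ParseFromString(record)
    feature = example.features.feature
    xmins = feature['image/object/bbox/xmin'].float_list.value
    labels = feature['image/object/class/label'].int64_list.value
    print('record %d: xmins=%s labels=%s' % (i, list(xmins), list(labels)))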
So I'm tackling this machine-learning problem (from a previous Kaggle competition, for practice: https://www.kaggle.com/c/nyc-taxi-trip-duration) and I'm trying to use XGBoost, but I'm getting an error that I have no clue how to resolve. I searched on Google and Stack Overflow but couldn't find anyone with a similar problem.
I'm using Python 2.7 with the Spyder IDE through Anaconda, and I'm on Windows 10. I did have some trouble installing the xgboost package, so I won't completely rule out the idea that it could be an installation error. However, I'm also doing a Udemy course on ML where I was able to use xgboost just fine with a small dataset, and I'm using the same functions.
Code
The code is pretty simple:
... import libraries
# import dataset
dataset = pd.read_csv('data/merged.csv')
y = dataset['trip_duration'].values
del dataset['trip_duration'], dataset["id"], dataset['distance']
X = dataset.values
# Split dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
# fit XGBoost to training set
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
Output
However, it spits out the following error:
In [1]: classifier.fit(X_train, y_train)
Traceback (most recent call last):
File "<ipython-input-44-f44724590846>", line 1, in <module>
classifier.fit(X_train, y_train)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\sklearn.py", line 464, in fit
verbose_eval=verbose)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\training.py", line 204, in train
xgb_model=xgb_model, callbacks=callbacks)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\training.py", line 74, in _train_internal
bst.update(dtrain, i, obj)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\core.py", line 819, in update
_check_call(_LIB.XGBoosterUpdateOneIter(self.handle, iteration, dtrain.handle))
WindowsError: [Error -529697949] Windows Error 0xE06D7363
I don't really know how to interpret this so any help would be very appreciated.
Thanks in advance
MortZ
Well, after struggling for a few days, I managed to find a solution.
A friend of mine told me xgboost is known to have problems with Python 2.7, so I upgraded to 3.6. This didn't entirely solve my problem, but it gave me a new error:
OSError: [WinError 541541187] Windows Error 0x20474343
After some digging, I found a solution to this. The fit function I was trying to use was the source of the problem (although it did work on a different dataset, so I'm not entirely sure why).
Solution
change
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
to
import xgboost as xgb  # needed for DMatrix and train

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
watchlist = [(dtrain, 'train'), (dtest, 'test')]
xgb_pars = {'min_child_weight': 1, 'eta': 0.5, 'colsample_bytree': 0.9,
            'max_depth': 6, 'subsample': 0.9, 'lambda': 1., 'nthread': -1,
            'booster': 'gbtree', 'silent': 1, 'eval_metric': 'rmse',
            'objective': 'reg:linear'}
model = xgb.train(xgb_pars, dtrain, 10, watchlist, early_stopping_rounds=2,
                  maximize=False, verbose_eval=1)
print('Modeling RMSLE %.5f' % model.best_score)
I guess the error is because you are using XGBClassifier instead of XGBRegressor for a regression problem (trip duration is a continuous target).
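If you want to keep the sklearn-style interface, here is a hedged sketch of the regressor variant (it reuses X_train, y_train, and X_test from the question; the hyperparameters are illustrative, not tuned):

from xgboost import XGBRegressor

# Regression counterpart of XGBClassifier: fits the continuous
# trip-duration target directly instead of treating it as class labels.
regressor = XGBRegressor(max_depth=6, learning_rate=0.5, n_estimators=10)
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)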
There is very little information about how to fine-tune parameters, and it really confuses me how to fine-tune a network in Caffe2. Could anybody show me some code for the fine-tuning part? Many thanks.
By the way, in the link Food101 SqueezeNet Caffe2 number of iterations, it seems that the author has successfully fine-tuned the network.
Edit: here is some code from my training part:
train_model = cnn.CNNModelHelper(order="NCHW", name="train")
train_model.param_init_net.AppendNet(core.Net(init_net))
train_model.net.AppendNet(core.Net(predict_net))
train_model.param_init_net.RunAllOnGPU(gpu_id=0)
train_model.net.RunAllOnGPU(gpu_id=0)
workspace.RunNetOnce(train_model.param_init_net)
AddTrainingOperators(train_model, 'softmaxout', 'label')
AddBookkeepingOperators(train_model)
workspace.RunNetOnce(train_model.param_init_net)
data, label = AddInput(train_model, batch_size=3,
db=os.path.join(data_folder, 'toy_train.lmdb'),
db_type='lmdb')
workspace.FeedBlob('data', data)
workspace.FeedBlob('label', label)
workspace.CreateNet(train_model.net)
However, when I run the code, the following error occurs:
Traceback (most recent call last):
File "lenetForChineseFinetune.py", line 62, in <module>
workspace.FeedBlob('data', data)
File "/opt/caffe2/caffe2/local/caffe2/python/workspace.py", line 262, in FeedBlob
return C.feed_blob(name, arr)
RuntimeError: [enforce fail at pybind_state.cc:825] . Unexpected type of argument - only numpy array or string are supported for feeding
How should I modify the code?
I am using the sklearn 0.14 module in Python to create a decision tree. I was hoping to use the OneHotEncoder to convert some features into categorical features. According to the documentation, I should be able to provide an array of indices to indicate which features should be converted. However, trying the following code:
import numpy
from sklearn import preprocessing

xs = [[64, 15230], [3, 67673], [16, 43678]]
encoder = preprocessing.OneHotEncoder(n_values='auto', categorical_features=[1], dtype=numpy.integer)
encoder.fit(xs)
I receive the following error:
Traceback (most recent call last):
  File "C:\Users\sara\Documents\Shipping Project\PythonSandbox\CarrierDecisionTree.py", line 35, in <module>
    encoder.fit(xs)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 892, in fit
    self.fit_transform(X)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 944, in fit_transform
    self.categorical_features, copy=True)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 795, in _transform_selected
    return sparse.hstack((X_sel, X_not_sel))
  File "C:\Python27\lib\site-packages\scipy\sparse\construct.py", line 417, in hstack
    return bmat([blocks], format=format, dtype=dtype)
  File "C:\Python27\lib\site-packages\scipy\sparse\construct.py", line 532, in bmat
    dtype = upcast( *tuple([A.dtype for A in blocks[block_mask]]) )
  File "C:\Python27\lib\site-packages\scipy\sparse\sputils.py", line 53, in upcast
    raise TypeError('no supported conversion for types: %r' % (args,))
TypeError: no supported conversion for types: (dtype('int32'), dtype('S6'))
If instead I provide the array [0, 1] to categorical_features, it works correctly and converts both features properly. The same correct behavior occurs when using 'all' for categorical_features. However, I only want the second feature converted, not the first. I understand I could do this manually by converting one feature at a time, but I was hoping to use all the beauty of OneHotEncoder, as I will be using many more features later on.
Posting as an answer, for the record:
TypeError: no supported conversion for types: (dtype('int32'), dtype('S6'))
means something in the true xs (not the one shown in the code snippet) is a string: dtype('S6') is NumPy's length-six string type.
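A quick way to confirm this (a hedged sketch; the mixed list below stands in for whatever the real xs contains):

import numpy

# One string entry upcasts the whole array to a string dtype
# ('S6' on Python 2, '<U6' on Python 3).
xs = [[64, '15,230'], [3, '67,673'], [16, '43,678']]
print(numpy.asarray(xs).dtype)

Checking numpy.asarray(xs).dtype on the real data before calling fit() will show immediately whether string values have slipped in.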