I have to use a TensorFlow 2.X model with the OpenCV framework (v4.X with C++).
To do this, I need a single .pb file, or a .pb plus a .pbtxt file, instead of the TensorFlow SavedModel I currently have.
So my question is: is there a way to convert a SavedModel into a format that OpenCV can read? Maybe a Caffe model?
I tried MMdnn, but it gives me a strange error:
Traceback (most recent call last):
File "/usr/local/bin/mmconvert", line 8, in <module>
sys.exit(_main())
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
ret = convertToIR._convert(ir_args)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 62, in _convert
from mmdnn.conversion.tensorflow.tensorflow_parser import TensorflowParser
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py", line 15, in <module>
from tensorflow.tools.graph_transforms import TransformGraph
ImportError: No module named 'tensorflow.tools.graph_transforms'
I suppose this is because MMdnn was developed and tested against TensorFlow 1.X.
Edit: I also have the corresponding Keras model (now that Keras is integrated into TensorFlow 2), but it is incompatible with the OpenCV DNN module as well. Trying to convert it with MMdnn, I get this error:
Traceback (most recent call last):
File "/usr/local/bin/mmconvert", line 8, in <module>
sys.exit(_main())
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
ret = convertToIR._convert(ir_args)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 46, in _convert
parser = Keras2Parser(model)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/keras/keras2_parser.py", line 126, in __init__
model = self._load_model(model[0], model[1])
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/keras/keras2_parser.py", line 78, in _load_model
'DepthwiseConv2D': layers.DepthwiseConv2D})
File "/usr/local/lib/python3.5/dist-packages/keras/engine/saving.py", line 664, in model_from_json
return deserialize(config, custom_objects=custom_objects)
File "/usr/local/lib/python3.5/dist-packages/keras/layers/__init__.py", line 168, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 147, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python3.5/dist-packages/keras/engine/network.py", line 1056, in from_config
process_layer(layer_data)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/network.py", line 1042, in process_layer
custom_objects=custom_objects)
File "/usr/local/lib/python3.5/dist-packages/keras/layers/__init__.py", line 168, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 149, in deserialize_keras_object
return cls.from_config(config['config'])
File "/usr/local/lib/python3.5/dist-packages/keras/engine/base_layer.py", line 1179, in from_config
return cls(**config)
File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/layers/convolutional.py", line 484, in __init__
**kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/layers/convolutional.py", line 117, in __init__
self.kernel_initializer = initializers.get(kernel_initializer)
File "/usr/local/lib/python3.5/dist-packages/keras/initializers.py", line 515, in get
return deserialize(identifier)
File "/usr/local/lib/python3.5/dist-packages/keras/initializers.py", line 510, in deserialize
printable_module_name='initializer')
File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 140, in deserialize_keras_object
': ' + class_name)
ValueError: Unknown initializer: GlorotUniform
Edit 04/2021: Now the ONNX converter mentioned in the comments works properly with OpenCV 4.5.1 (Version 4.5.0 has a bug with some ONNX networks).
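For reference, a minimal sketch of that ONNX route: convert the SavedModel on the command line with python -m tf2onnx.convert --saved-model <saved_model_dir> --output model.onnx, then load the result with OpenCV's dnn module. It is shown here in Python; the C++ calls are analogous, and the file names are placeholders.

# Hedged sketch: load an ONNX model exported by tf2onnx into OpenCV's dnn module.
import cv2

net = cv2.dnn.readNetFromONNX('model.onnx')

# Build a 4D blob from an image and run one forward pass.
image = cv2.imread('input.jpg')
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255, size=(224, 224))
net.setInput(blob)
output = net.forward()
print(output.shape)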
If you have the .h5 file, you can instead try the following approach with plain TensorFlow rather than MMdnn. The idea is to freeze the current session into a static computation graph, capturing the current variable values as constants, and then write that graph out in .pb format using tf.train.write_graph.
You can load the pretrained model with model = load_model('./model/keras_model.h5') before you freeze the graph. There is also a blog post with further explanation.
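A minimal sketch of that freezing approach for TensorFlow 2.x follows; file paths are placeholders, and convert_variables_to_constants_v2 is the usual helper for folding a Keras model's variables into a single graph.

# Hedged sketch: freeze a Keras .h5 model into a single frozen_graph.pb that
# OpenCV's readNetFromTensorflow can load. Paths are placeholders.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

model = tf.keras.models.load_model('./model/keras_model.h5')

# Wrap the model in a concrete function, then fold its variables into constants.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
)
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Serialize the frozen GraphDef to disk.
tf.io.write_graph(
    graph_or_graph_def=frozen_func.graph,
    logdir='./model',
    name='frozen_graph.pb',
    as_text=False,
)

The resulting frozen_graph.pb can then be loaded on the C++ side with cv::dnn::readNetFromTensorflow.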
Related
I am doing hyperparameter tuning on GCP using this scikit docker image. When I add the aiplatform package as a dependency, things break. The error comes from the bigquery import.
from google.cloud import bigquery
The error message is below.
The replica workerpool0-0 exited with a non-zero status of 1.
Traceback (most recent call last):
[...]
File "/root/.local/lib/python3.7/site-packages/trainer/task.py", line 7, in
from google.cloud import storage, bigquery
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/__init__.py", line 35, in
from google.cloud.bigquery.client import Client
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/client.py", line 60, in
from google.cloud.bigquery import _pandas_helpers
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/_pandas_helpers.py", line 40, in
from google.cloud.bigquery import schema
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/schema.py", line 19, in
from google.cloud.bigquery_v2 import types
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/__init__.py", line 23, in
from google.cloud.bigquery_v2 import types
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/types.py", line 23, in
from google.cloud.bigquery_v2.proto import encryption_config_pb2
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/proto/encryption_config_pb2.py", line 64, in
file=DESCRIPTOR,
File "/root/.local/lib/python3.7/site-packages/google/protobuf/descriptor.py", line 560, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
From the logs, I can see that the system is downloading google-cloud-aiplatform v1.17.0. According to the scikit docker image, google-cloud-storage v1.35.0 is installed, but google-cloud-aiplatform pulls in v2.5.0.
I am thinking I need to downgrade google-cloud-aiplatform to a specific version. Does anyone know which version, or how else to resolve this problem?
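One avenue the error message itself suggests is pinning protobuf below 4.x when packaging the trainer; a hedged sketch of what that could look like in the trainer's setup.py (package names and versions here are assumptions, not a verified fix):

# Hedged sketch of a trainer-package setup.py pinning protobuf below 4.x,
# per the workaround listed in the error message. Versions are assumptions.
from setuptools import find_packages, setup

setup(
    name='trainer',
    version='0.1',
    packages=find_packages(),
    install_requires=[
        'google-cloud-aiplatform==1.17.0',
        'google-cloud-bigquery',
        'protobuf<4',  # keep generated _pb2 modules loadable without regenerating them
    ],
)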
UPDATE: FWIW, if I downgrade to google-cloud-aiplatform==1.15.1, the problem above goes away. However, the problem below shows up instead.
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/.local/lib/python3.7/site-packages/trainer/hpt.py", line 170, in
staging_bucket=f'{args.bucket_uri}'
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/initializer.py", line 138, in init
backing_tensorboard=experiment_tensorboard,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata.py", line 235, in set_experiment
experiment_name=experiment, description=description
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/experiment_resources.py", line 247, in get_or_create
project=project, location=location, credentials=credentials
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 283, in ensure_default_metadata_store_exists
encryption_spec_key_name=encryption_key_spec_name,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 123, in get_or_create
credentials=credentials,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 241, in _get
credentials=credentials,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 73, in __init__
self._gca_resource = self._get_gca_resource(resource_name=metadata_store_name)
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/base.py", line 617, in _get_gca_resource
return getattr(self.api_client, self._getter_method)(
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/utils/__init__.py", line 425, in __getattr__
return getattr(self._clients[self._default_version], name)
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/utils/__init__.py", line 359, in __getattr__
client_info=self._client_info,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/client.py", line 547, in __init__
api_audience=client_options.api_audience,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/transports/grpc.py", line 190, in __init__
("grpc.max_receive_message_length", -1),
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/transports/grpc.py", line 241, in create_channel
**kwargs,
File "/root/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 318, in create_channel
default_host=default_host,
File "/root/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 239, in _create_composite_credentials
credentials, scopes=scopes, default_scopes=default_scopes
TypeError: with_scopes_if_required() got an unexpected keyword argument 'default_scopes'
I trained an SSDLite-MobileNetV2 model with TensorFlow, following the documentation provided in the Object Detection API. Then I exported the model by running the export_tflite_ssd_graph script; a .pb and a .pbtxt file were generated. Finally, I tried to convert the model to TFLite format using the tflite_convert command. However, I got the following error:
Traceback (most recent call last):
File "/usr/local/bin/tflite_convert", line 11, in
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
_convert_model(tflite_flags)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 100, in _convert_model
converter = _get_toco_converter(flags)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 87, in _get_toco_converter
return converter_fn(**converter_kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/lite.py", line 340, in from_saved_model
output_arrays, tag_set, signature_key)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/convert_saved_model.py", line 239, in freeze_saved_model
meta_graph = get_meta_graph_def(saved_model_dir, tag_set)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/lite/python/convert_saved_model.py", line 61, in get_meta_graph_def
return loader.load(sess, tag_set, saved_model_dir)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 197, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 350, in load
** saver_kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 275, in load_graph
meta_graph_def = self.get_meta_graph_def_from_tags(tags)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 251, in get_meta_graph_def_from_tags
" could not be found in SavedModel. To inspect available tag-sets in"
RuntimeError: MetaGraphDef associated with tags set(['serve']) could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli
It seems that the conversion script did not include the SERVING tag constant. How can I fix this?
I am using tensorflow-gpu 1.12.0
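One possible explanation (not verified): export_tflite_ssd_graph writes a frozen GraphDef rather than a SavedModel, so tflite_convert may need the frozen graph plus explicit input/output arrays instead of a SavedModel directory. Below is a hedged sketch using the TF 1.12-era Python API; the tensor names are the ones the exporter is documented to produce, so verify them for your model.

# Hedged sketch (TF 1.12-era API): convert the frozen graph produced by
# export_tflite_ssd_graph directly. Tensor names and shapes are assumptions.
import tensorflow as tf

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]},
)
converter.allow_custom_ops = True  # the detection postprocess op is a custom TFLite op
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)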
I am trying to use the Stanford Neural Dependency Parser provided by nltk. The problem I'm having is that when I call st = nltk.parse.stanford.StanfordNeuralDependencyParser(), I get the following error:
>>> st = nltk.parse.stanford.StanfordNeuralDependencyParser()
Traceback (most recent call last):
File "C:\Users\<user>\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-ca2dec4f3c1f>", line 1, in <module>
st = nltk.parse.stanford.StanfordNeuralDependencyParser()
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 378, in __init__
super(StanfordNeuralDependencyParser, self).__init__(*args, **kwargs)
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 51, in __init__
key=lambda model_name: re.match(self._JAR, model_name)
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\internals.py", line 714, in find_jar_iter
raise LookupError('\n\n%s\n%s\n%s' % (div, msg, div))
LookupError:
===========================================================================
NLTK was unable to find stanford-corenlp-(\d+)(\.(\d+))+\.jar! Set
the CLASSPATH environment variable.
For more information, on stanford-corenlp-(\d+)(\.(\d+))+\.jar, see:
<http://nlp.stanford.edu/software/lex-parser.shtml>
===========================================================================
But when I run os.environ.get('CLASSPATH'), I get this result:
`C:\nltk_data\;C:\nltk_data\stanford\;C:\nltk_data\stanford\stanford-ner\`
I know that I have the CoreNLP jar file in C:\nltk_data\stanford\, so I run the following and end up with a slightly different error.
>>> st = nltk.parse.stanford.StanfordNeuralDependencyParser('C:\\nltk_data\\stanford\\')
Traceback (most recent call last):
File "C:\Users\<user>\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-22-28d797d702d9>", line 1, in <module>
st = StanfordNeuralDependencyParser('C:\\nltk_data\\stanford\\')
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 378, in __init__
super(StanfordNeuralDependencyParser, self).__init__(*args, **kwargs)
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 51, in __init__
key=lambda model_name: re.match(self._JAR, model_name)
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\internals.py", line 635, in find_jar_iter
(name_pattern, path_to_jar))
LookupError: Could not find stanford-corenlp-(\d+)(\.(\d+))+\.jar jar file at C:\nltk_data\stanford\
I have downloaded the jar stanford-english-corenlp-2016-01-10-models.jar from the Stanford NLP website and renamed it to stanford-corenlp-2016-01-10.jar to try to match the pattern, but I still end up with the same errors. I have also downloaded the Stanford Parser version 3.6.0, but it doesn't contain any CoreNLP files.
Is there any way to get this to work, or am I misunderstanding something?
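One thing that may be relevant: NLTK matches jar names against the pattern stanford-corenlp-(\d+)(\.(\d+))+\.jar, so a date-style name like stanford-corenlp-2016-01-10.jar will not match. A hedged sketch of pointing CLASSPATH at version-numbered CoreNLP jars (paths and version numbers are examples only):

# Hedged sketch: put version-numbered CoreNLP jars on CLASSPATH before
# constructing the parser. Paths and versions below are examples only.
import os
import nltk

os.environ['CLASSPATH'] = ';'.join([
    r'C:\nltk_data\stanford\stanford-corenlp-3.6.0.jar',
    r'C:\nltk_data\stanford\stanford-corenlp-3.6.0-models.jar',
])
st = nltk.parse.stanford.StanfordNeuralDependencyParser()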
I am using the Blessed library to build a simple terminal application.
My application builds upon the following simple example for a dumb editor: https://github.com/jquast/blessed/blob/master/bin/editor.py
Warning: the following steps will break your IPython, and I don't know how to fix it!
For the purposes of this question, I'll just use editor.py. Let's make a couple of changes to allow debugging:
1) import ipdb
2) insert ipdb.set_trace() at line 224
Now run editor.py with python editor.py. The following error should be produced:
Traceback (most recent call last):
File "editor.py", line 14, in <module>
from manager import Manager
File "/home/abcd/python_scripts/editor.py", line 25, in <module>
import ipdb
File "/usr/local/lib/python2.7/dist-packages/ipdb/__init__.py", line 7, in <module>
from ipdb.__main__ import set_trace, post_mortem, pm, run, runcall, runeval, launch_ipdb_on_exception
File "/usr/local/lib/python2.7/dist-packages/ipdb/__main__.py", line 47, in <module>
ipapp.initialize([])
File "<decorator-gen-110>", line 2, in initialize
File "/usr/lib/python2.7/dist-packages/IPython/config/application.py", line 92, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/IPython/terminal/ipapp.py", line 332, in initialize
self.init_shell()
File "/usr/lib/python2.7/dist-packages/IPython/terminal/ipapp.py", line 348, in init_shell
ipython_dir=self.ipython_dir, user_ns=self.user_ns)
File "/usr/lib/python2.7/dist-packages/IPython/config/configurable.py", line 354, in instance
inst = cls(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/IPython/terminal/interactiveshell.py", line 328, in __init__
**kwargs
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 483, in __init__
self.init_readline()
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 1843, in init_readline
self.readline_startup_hook = readline.set_startup_hook
AttributeError: 'module' object has no attribute 'set_startup_hook'
If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@scipy.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
c.Application.verbose_crash=True
Now, whenever one runs IPython by executing the ipython command, this error will be produced:
Traceback (most recent call last):
File "/usr/bin/ipython", line 5, in <module>
start_ipython()
File "/usr/lib/python2.7/dist-packages/IPython/__init__.py", line 120, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/usr/lib/python2.7/dist-packages/IPython/config/application.py", line 564, in launch_instance
app.initialize(argv)
File "<decorator-gen-110>", line 2, in initialize
File "/usr/lib/python2.7/dist-packages/IPython/config/application.py", line 92, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/IPython/terminal/ipapp.py", line 332, in initialize
self.init_shell()
File "/usr/lib/python2.7/dist-packages/IPython/terminal/ipapp.py", line 348, in init_shell
ipython_dir=self.ipython_dir, user_ns=self.user_ns)
File "/usr/lib/python2.7/dist-packages/IPython/config/configurable.py", line 354, in instance
inst = cls(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/IPython/terminal/interactiveshell.py", line 328, in __init__
**kwargs
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 483, in __init__
self.init_readline()
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 1843, in init_readline
self.readline_startup_hook = readline.set_startup_hook
AttributeError: 'module' object has no attribute 'set_startup_hook'
If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@scipy.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
c.Application.verbose_crash=True
So, IPython seems to be globally broken. I have gotten this issue on both Cygwin and Ubuntu.
What's going wrong?
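A hedged diagnostic sketch, not a fix: the AttributeError means the readline module that IPython imported does not expose set_startup_hook, so checking which readline actually gets picked up in the same environment can narrow things down.

# Hedged diagnostic: confirm which readline module is imported and whether it
# provides set_startup_hook. A shadowing readline.py on the path, or a broken
# pyreadline install on Cygwin/Windows, would show up here.
import readline

print(getattr(readline, '__file__', 'built-in'))
print(hasattr(readline, 'set_startup_hook'))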
I am using pandas v0.14.1 with Python 2.7.
I have a groupby object and I am trying to pull out a group identified by a particular key. The key is in fact present in the groups:
>>> key in key_groups.groups.keys()
True
but when I try to make the get_group call it fails with a memory error:
>>> key_groups.get_group(key)
*** MemoryError:
The full stacktrace is:
Traceback (most recent call last):
File "main.py", line 141, in <module>
main(num_days=arguments.days, num_variants=arguments.variants)
File "main.py", line 76, in main
problem, solution = Solver.Solve(request, num_variants)
File "/srv/compunctuator/src/Solver.py", line 49, in Solve
solution = attempt_minimization(t)
File "/srv/compunctuator/src/Solver.py", line 41, in attempt_minimization
t.scruple()
File "/srv/compunctuator/src/Compunctuator.py", line 136, in scruple
self.__iterate__()
File "/srv/compunctuator/src/Compunctuator.py", line 95, in __iterate__
self.__maximize_impressions__()
File "/srv/compunctuator/src/Compunctuator.py", line 583, in __maximize_impressions__
df = key_groups.get_group(key)
File "/srv/compunctuator/.virtualenvs/compunctuator/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 573, in get_group
inds = self._get_index(name)
File "/srv/compunctuator/.virtualenvs/compunctuator/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 429, in _get_index
sample = next(iter(self.indices))
File "/srv/compunctuator/.virtualenvs/compunctuator/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 414, in indices
return self.grouper.indices
File "properties.pyx", line 34, in pandas.lib.cache_readonly.__get__ (pandas/lib.c:36380)
File "/srv/compunctuator/.virtualenvs/compunctuator/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 1253, in indices
return _get_indices_dict(label_list, keys)
File "/srv/compunctuator/.virtualenvs/compunctuator/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 3474, in _get_indices_dict
np.prod(shape))
File "algos.pyx", line 1997, in pandas.algos.groupsort_indexer (pandas/algos.c:37521) MemoryError
If I actually use the dictionary lookup I can get the indices out:
>>> key_groups.groups[key]
[0, 2]
It seems like everything should work here.
I realize a similar question was asked here: pandas get_group causes memory error. But it was never resolved, and I thought I could give more details if necessary.
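Since the dictionary lookup works, a hedged workaround sketch is to select the group's rows from the original frame directly; here df stands for the DataFrame the groupby was built from (a hypothetical name, adjust to your code).

# Hedged workaround: bypass get_group and use the labels from the groups dict.
# 'df' is the DataFrame behind key_groups.
rows = key_groups.groups[key]
group_df = df.loc[rows]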