I have trained a classic CNN (a pre-trained MobileNet) for image classification, and I now want to use this model from C++. From my understanding, I need to create a library around the model that can accept inputs and return its outputs. I have the model saved in the .pb (SavedModel) format.
I have already tried CppFlow, where the error shows that it can't read my model. I assume it's due to incompatibility with TF 2.0.
I have also got the SavedModel command line interface working, but I don't know how to use it in my C++ application.
I want to know how I can build a library of my model and use this library such that it can make predictions on the fly. Any guidance will be helpful. Please let me know if any additional information is required.
One way of using a Keras model in C++ is to convert it to the TensorFlow .pb format. I've composed a script for doing this, shown below.
Usage: python script.py keras_model.hdf5
It outputs the TensorFlow model using the input file name with .pb appended.
Then you can use the TF C++ API to read the model and run inference. A nice, detailed example of using an image recognition model to label images with the TF C++ API is located here.
Another option: you may use Keras directly by calling the Python API from C++. It is not that difficult; there is a standalone Python that is compiled statically, meaning it has no dll/shared-library dependencies at all, so the Python interpreter can be fully compiled into a single C++ binary. There are also many libraries on the Internet that help you easily run Python from C++.
import sys, os
from keras import backend as K
from keras.models import load_model
import tensorflow as tf


def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph


if len(sys.argv) <= 1:
    print('Usage: python script.py keras_model.hdf5')
    sys.exit(0)
else:
    ifname = sys.argv[1]
    model = load_model(ifname)
    frozen_graph = freeze_session(
        K.get_session(),
        output_names=[out.op.name for out in model.outputs],
    )
    tf.io.write_graph(frozen_graph, os.path.dirname(ifname), ifname + '.pb', as_text=False)
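Before moving to the C++ side, you can sanity-check the exported graph from Python. A minimal sketch (assuming TF 1.14+ where the compat.v1 API is available; the file name is just what the script above produces when run on keras_model.hdf5):
import tensorflow as tf

# Load the frozen graph back and list a few operation names, e.g. to find
# the input/output tensor names you will need on the C++ side.
with tf.io.gfile.GFile('keras_model.hdf5.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name='')
    print([op.name for op in graph.get_operations()][:10])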
There are also standalone third-party libraries that can import a Keras model into C++ for inference without much work on our side.
Examples are
Multithreaded library for image segmentation models such as U-Net etc. - https://github.com/upashu1/keras2cpp_multithreading_image_segmentation
A library that supports most of the layers:
https://github.com/Dobiasd/frugally-deep
I'm trying to generate a trace plot of my model, but it shows a module 'pymc3' has no attribute 'traceplot' error. My code is:
with pm.Model() as our_first_model:
    # a priori
    theta = pm.Beta('theta', alpha=1, beta=1)
    # likelihood
    y = pm.Bernoulli('y', p=theta, observed=data)
    # y = pm.Binomial('theta', n=n_experimentos, p=theta, observed=sum(datos))
    start = pm.find_MAP()
    step = pm.Metropolis()
    trace = pm.sample(1000, step=step, start=start)

burnin = 0  # no burnin
chain = trace[burnin:]
pm.traceplot(chain, lines={'theta': theta_real});
which then gives the following error:
AttributeError Traceback (most recent call last)
<ipython-input-8-40f97a342e0f> in <module>
1 burnin = 0 # no burnin
2 chain = trace[burnin:]
----> 3 pm.traceplot(chain, lines={'theta':theta_real});
AttributeError: module 'pymc3' has no attribute 'traceplot'
I'm on Windows 10 and I've installed pymc3 with pip since it was not included in the Anaconda distribution I downloaded.
Since several versions ago, PyMC3 delegates plotting and stats to ArviZ, and the original plotting commands were kept as aliases to ArviZ methods for convenience and ease of transition.
The latest PyMC3 release (3.11.0) is the first that does not include aliases such as pm.traceplot. You have to use arviz.plot_trace, which works with PyMC3 objects.
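As a minimal sketch (assuming ArviZ is installed alongside PyMC3; chain and theta_real are the objects from the question):
import arviz as az

# az.plot_trace replaces the removed pm.traceplot alias.
# In recent ArviZ versions, 'lines' takes a list of (var_name, selection, values) tuples.
az.plot_trace(chain, lines=[("theta", {}, [theta_real])])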
Extra notes unrelated to the question itself:
You are using pm.find_MAP to initialize the chain and you are manually setting the sampler to pm.Metropolis instead of allowing pm.sample to select its own defaults. There are reasons to do so and it's not intrinsically wrong, but it is discouraged; see the PyMC3 FAQs.
PyMC3 is transitioning to using InferenceData as the default output of pm.sample. I would recommend setting return_inferencedata=True in pm.sample for the following reasons: 1) ArviZ functions convert to this format under the hood, so you will avoid this small overhead, 2) InferenceData has more capabilities than MultiTrace, 3) PyMC3 is transitioning to InferenceData as the default output of pm.sample, so why not get started already?
You have a # no burnin comment; however, the trace returned by pm.sample has already had a burn-in of length equal to the tune parameter performed on it. The default value of tune is 1000. To actually get all the samples and see how the MCMC slowly converges to the typical set, you need to use discard_tuned_samples=False.
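Putting the two notes together, a hedged sketch of the suggested call (argument names as in PyMC3 3.11; the model is the one from the question):
with our_first_model:
    trace = pm.sample(
        1000,
        return_inferencedata=True,       # get an ArviZ InferenceData object directly
        discard_tuned_samples=False,     # keep the tuning (burn-in) samples as well
    )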
Some InferenceData resources:
InferenceData overview: https://arviz-devs.github.io/arviz/getting_started/XarrayforArviZ.html
Working with InferenceData examples (shows how to perform burn-in among other things): https://arviz-devs.github.io/arviz/getting_started/WorkingWithInferenceData.html
I have trained SSD ResNet V1 model using Tensorflow 2 Object Detection API. Then I wanted to use this model with OpenCV in C++ code.
First of all, after training I had three files:
checkpoint
ckpt-101.data-00000-of-00001
ckpt-101.index
Note that I don't have .meta file because it wasn't generated.
Then I created SavedModel from these files using exporter_main_v2.py script that is in Object Detection API:
python3 exporter_main_v2.py --input_type=image_tensor --pipeline_config_path /path/to/pipeline.config --trained_checkpoint_dir=/path/to/checkouts --output_directory=/path/to/output/directory
Having run this script I got saved_model.pb
I tried to use this file in OpenCV in such way:
cv::dnn::Net net = cv::dnn::readNetFromTensorflow("/path/to/saved_model.pb");
But I got the following error:
OpenCV(4.2.0) /home/andrew/opencv/modules/dnn/src/tensorflow/tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: /home/andrew/Documents/tensorflow_detection/workspace/pb_model/saved_model/saved_model.pb in function 'ReadTFNetParamsFromBinaryFileOrDie'
Then I tried to freeze saved_model.pb. But, as I understand it, this is impossible in TF 2.x because TF 2.x doesn't support Sessions and Graphs. Also, I don't have a .pbtxt file.
My question: is it possible to use models trained with TF2 Object Detection API in OpenCV C++?
I will be grateful if you help me solve these problems or give any useful advice.
It is possible to use TensorFlow 2 models with the Object Detection API and OpenCV, as stated in the dedicated wiki: https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
So far there are more models compatible with TensorFlow 1, but it should be okay for an SSD.
To freeze your graph you have to do:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
loaded = tf.saved_model.load('my_model')
infer = loaded.signatures['serving_default']
f = tf.function(infer).get_concrete_function(input_1=tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32))
f2 = convert_variables_to_constants_v2(f)
graph_def = f2.graph.as_graph_def()
# Export frozen graph
with tf.io.gfile.GFile('frozen_graph.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())
As mentioned in this comment in the OpenCV GitHub issues: https://github.com/opencv/opencv/issues/16582#issuecomment-603819498
You will then probably need to use the tf_text_graph_ssd.py script provided in the OpenCV wiki to generate the text graph representation of the frozen model, and that'd be it!
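For illustration, a minimal sketch of that last step (file names are assumptions; the tf_text_graph_ssd.py flags follow the OpenCV wiki, with pipeline.config being the one used for training):
# Step 1 (shell): generate the text graph from the frozen model, e.g.
#   python tf_text_graph_ssd.py --input frozen_graph.pb \
#       --config pipeline.config --output frozen_graph.pbtxt
# Step 2: load both files with OpenCV's DNN module.
import cv2

net = cv2.dnn.readNetFromTensorflow('frozen_graph.pb', 'frozen_graph.pbtxt')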
TensorFlow 2 no longer supports sessions, so you can't easily export your model as a frozen graph. I found this, which solved the issues I had with using TensorFlow Object Detection models with OpenCV. Hopefully this will help.
Currently I am using the AnnotateVideo function to analyse videos. Is there any way to analyse only a section of a video, such as by providing start_time and end_time as arguments to the function?
gs_video_path ='gs://'+bucket_name+'/'+videodata.video.path+videodata.video.name
print(gs_video_path)
video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.OBJECT_TRACKING]
operation = video_client.annotate_video(gs_video_path, features=features)
You can analyse only the sections you're interested in by providing a VideoContext with a VideoSegment list. Here is an example with a single 21..42s segment:
from google.cloud import videointelligence
from google.cloud.videointelligence import enums, types
video_client = videointelligence.VideoIntelligenceServiceClient()
gs_video_path = f'gs://{bucket_name}/{videodata.video.path}{videodata.video.name}'
features = [enums.Feature.OBJECT_TRACKING]
segment = types.VideoSegment()
segment.start_time_offset.FromSeconds(21)
segment.end_time_offset.FromSeconds(42)
context = types.VideoContext(segments=[segment])
operation = video_client.annotate_video(
    gs_video_path,
    features=features,
    video_context=context)
If you want more examples, I recently wrote this tutorial: https://codelabs.developers.google.com/codelabs/cloud-video-intelligence-python3
What you can do is to analyse the full video and then retrieve the annotations for the specified times or frames, see this code.
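As a rough sketch of that filtering step (field names as in the v1 client used above; the 21..42s window is just an example):
# Wait for the operation started above and keep only annotations
# whose tracked segment falls inside the 21..42s window.
result = operation.result(timeout=600)
for annotation in result.annotation_results[0].object_annotations:
    start = annotation.segment.start_time_offset
    end = annotation.segment.end_time_offset
    start_s = start.seconds + start.nanos / 1e9
    end_s = end.seconds + end.nanos / 1e9
    if start_s >= 21 and end_s <= 42:
        print(annotation.entity.description, start_s, end_s)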
If this doesn't fit your requirements because the videos are too long and you only want to process a specific part, I suggest you use an external tool to cut the video locally and then annotate that fragment. For example, you can use the following code to cut the video (there are many other tools).
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip
ffmpeg_extract_subclip("video1.mp4", start_time, end_time, targetname="test.mp4")
And then you will have to process the video from a local file.
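A minimal sketch of that local-file path (v1 client, same as above; the file name comes from the ffmpeg_extract_subclip call):
from google.cloud import videointelligence

video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.OBJECT_TRACKING]

# Pass the cut fragment as raw bytes instead of a gs:// URI.
with open('test.mp4', 'rb') as f:
    input_content = f.read()

operation = video_client.annotate_video(
    input_content=input_content, features=features)
result = operation.result(timeout=300)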
I'm trying to find out how I can modify the way a custom TensorFlow estimator creates event files for TensorBoard. Currently, I have the impression that, by default, a summary (containing the values of everything I'm following with tf.summary.scalar(...), such as accuracy) is created every 100 steps in my model directory. The names of the event files later used by TensorBoard look like
events.out.tfevents.1531418661.nameofmycomputer.
I found a routine online to change this behaviour and create directories for each run with the date and time of the computation, but it uses TensorFlow basic APIs:
logdir = "tensorboard/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") + "/"
writer = tf.summary.FileWriter(logdir, sess.graph)
Is it possible to do something similar with a TF custom estimator?
It is possible to specify a directory for each evaluation run using the name argument of the evaluate method of tf.estimator.Estimator, e.g.:
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir=model_dir
)

eval_results = estimator.evaluate(
    input_fn=eval_input_fn,
    name=eval_name
)
The event files for this evaluation will be saved in the directory inside model_dir named "eval_" + eval_name.
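For example, to mimic the timestamped run directories from the question, a name like the following should do (a sketch; eval_input_fn is whatever input function you already use):
import datetime

# Each evaluation gets its own "eval_<name>" subdirectory under model_dir,
# so TensorBoard shows the runs separately.
eval_name = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
eval_results = estimator.evaluate(input_fn=eval_input_fn, name=eval_name)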
Summary writers are not needed for TensorFlow Estimators. The summary log of the model is written to the folder designated by the model_dir argument of tf.estimator.Estimator when the estimator's train() method is called.
In the example below, the selected directory to store the training logs is './my_model'.
tf.estimator.Estimator(
    model_fn,
    model_dir='./my_model',
    config=None,
    params=None,
    warm_start_from=None
)
Launch TensorBoard by running tensorboard --logdir=./my_model from the terminal.
I'm trying to write a Python (2.7) library which loads certain classes at runtime. These classes contain a predefined set of methods.
My approach is to define a few metaclasses that I work with in my library. For example, I would define a "Navigation" metaclass and work with it in the library. Then someone could write a class "Mainmenu" which contains some kind of type declaration marking it as a "Navigation" plugin, and the library could then use this class.
I am able to load modules and I'm able to write Metaclasses. My problem lies in combining these two things.
First there is the problem that I want the "plugin-classes" to be in a different (configurable) folder. So I can not do:
__metaclass__ = Navigation
because the Navigation class is part of my library and won't be there in the plugin-folder...
How could I solve the problem of telling the library which type a plugin is for (Navigation, Content, etc.)?
EDIT 2: I solved the following problem. I found out that I can just ask the module to give me a dict.
My first problem still exists, though.
EDIT:
I managed to register and load "normal" classes with a registry up to the following point:
from os import listdir
from os.path import isfile, join
import imp


class State:
    registry = {}

    def register_class(self, target_class):
        self.registry[target_class.__name__] = target_class
        print target_class.__name__ + " registered!"

    def create(self, classname):
        tcls = self.registry[classname]
        print self.registry[classname]
        return tcls()


s = State()

mypath = """C:\metatest\plugins"""
files = [f for f in listdir(mypath) if isfile(join(mypath, f))]

for f in files:
    spl = f.split(".")
    if spl[1] == "py":
        a = imp.load_source(spl[0], mypath + """\\""" + f)
        s.register_class(a)
The problem I have at the end now is that "a" is the loaded module, so it is a module object. In my case there is only one class in it.
How can I get a class object from the loaded module, so I can register the class properly?
So, let's examine your problem, stepping back from your current proposal.
You need a way to have plug-ins for a larger system - the larger system won't know about the plug-ins at coding time - but the converse is not true: your plugins should be able to load modules, import base classes and call functions on your larger system.
That is, unless you really have something so pluggable that plug-ins can work with more than one larger system. I doubt it, but if that is the case you need a framework that can register interfaces and retrieve classes and adapter implementations between different classes. That framework is Zope Interface - you should read its documentation here: https://zopeinterface.readthedocs.io/en/latest/
Much more down to earth would be a plug-in system that scans some preset directories for Python files and imports those. As I said above, there is no problem if these files import base classes (or metaclasses, for the record) from your main system: these are already imported by Python in the running process anyway, so importing them in the plug-in will just make them show up as available in the plug-in code.
You can use the exact code above; just add a short metaclass to register classes derived from State - you can make the convention that each base class for a different plug-in category has the registry attribute:
class RegistryMeta(type):
    def __init__(cls, name, bases, namespace):
        for base in cls.__mro__:
            if 'registry' in base.__dict__:
                if cls.__name__ in base.registry:
                    raise ValueError("Attempting to register a plug-in with the name {} which is already taken".format(cls.__name__))
                base.registry[cls.__name__] = cls
                break
        super(RegistryMeta, cls).__init__(name, bases, namespace)


class State(object):
    __metaclass__ = RegistryMeta
    registry = {}
    ...
(Keep the code for scanning the directory and loading modules - just switch all directory separator bars to "/" - you are still not doing it right and are subject to surprises by using "\".)
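For illustration, a sketch of that loop with forward slashes (same imp-based loading as in your code; note that with the metaclass in place the explicit register_class call is no longer needed):
mypath = "C:/metatest/plugins"
for f in listdir(mypath):
    if f.endswith(".py"):
        # Importing the module is enough: RegistryMeta registers every
        # State subclass defined in it as a side effect of class creation.
        imp.load_source(f[:-3], mypath + "/" + f)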
and in the plug-in code include:
from mysystem import State
class YourClassCode(State):
    ...
And finally, as I said in the comment: you should really consider the possibility of using Python 3.6 for this. Among other niceties, you could use the __init_subclass__ special method instead of needing a custom metaclass to maintain your registries.
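A minimal sketch of that approach, assuming Python 3.6+ (class names here are just for illustration):
class State:
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls.__name__ in State.registry:
            raise ValueError("Plug-in name {} is already taken".format(cls.__name__))
        # Runs automatically for every subclass, no metaclass needed.
        State.registry[cls.__name__] = cls


class Mainmenu(State):
    pass


print(State.registry)  # {'Mainmenu': <class '__main__.Mainmenu'>}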