Building models and estimators in tf2.0 without tf.keras - tensorflow-estimator

Given that the layers API (tf.layers) has been deprecated, how do I build models in TF 2 without using tf.keras? Or what is the recommended way to build models? Issue #30829 asks the same question but was closed without any answers.
Update:
I'm okay with using tf.keras.layers instead of tf.layers, but once I've built all the layers and need to return the model, is there a way NOT to use the Keras Model, compile, fit, predict, and evaluate, and instead do it the TensorFlow way?
In case you were wondering why I would want something like that: I would like to use estimators for training rather than Keras's fit function. There is a keras_model_to_estimator, but it doesn't seem mature enough yet.
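(For reference, the conversion the question refers to exists in TF as tf.keras.estimator.model_to_estimator; a minimal sketch, where model stands for an already-compiled tf.keras model:)
import tensorflow as tf

# model is a compiled tf.keras.Model
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)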

Google released a migration guide from TF 1 to TF 2; see the section "Converting Models".
Recommended way to build models
The guide (section "Models based on tf.layers") recommends converting tf.layers models to tf.keras.layers:
The conversion was one-to-one because there is a direct mapping from v1.layers to tf.keras.layers.
Build models without Keras
Another option is to provide your own layer implementations (example from the guide):
import tensorflow as tf

W = tf.Variable(tf.ones(shape=(2, 2)), name="W")
b = tf.Variable(tf.zeros(shape=(2,)), name="b")

@tf.function
def forward(x):
    return W * x + b

out_a = forward([1, 0])
print(out_a)
But it is worth considering tf.keras.layers.Layer (example), which gives you some freedom but integrates with the rest of Keras (and its layers).
Even with layers written with tf.keras, you can write your own training loop (example).
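A minimal sketch combining both points: a subclassed tf.keras.layers.Layer trained with a hand-written loop instead of compile()/fit() (the toy layer and data are illustrative, not from the guide):
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    """A custom layer; weights created here are tracked by Keras."""
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="random_normal", trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer="zeros", trainable=True)

    def call(self, x):
        return tf.matmul(x, self.w) + self.b

layer = Linear(1)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

# plain training loop with GradientTape -- no compile()/fit()
x = tf.constant([[1.0], [2.0], [3.0]])
y = tf.constant([[2.0], [4.0], [6.0]])
for _ in range(100):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(layer(x) - y))
    grads = tape.gradient(loss, layer.trainable_variables)
    optimizer.apply_gradients(zip(grads, layer.trainable_variables))
print(loss.numpy())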
To sum up, in practice TF 2.0 requires you to use tf.keras.

Related

How to add attributes in combination with object detection using YOLO?

I'm new to computer vision and I'm wondering how to deal with the following problem.
I'm using YOLO for a real-time object detection task. However, I'm dealing with a dataset that also gives me a few attributes such as weather, temperature, etc. (I'm obviously able to access this information in real time, so I can use it in production.)
My data differs significantly depending on the weather, temperature, etc., which is why it's useful to have access to this information.
So is there any way to learn on an image dataset associated with a context? I'm looking for something that is YOLO compatible.
If such a thing isn't compatible or doesn't exist, I guess I'll just train different versions of YOLO on specific datasets associated with different contexts. Each specific version will be activated only for a specific weather and temperature.
Thank you in advance for any kind of help/information.
You will need to build your own custom model that combines visual features with tabular data. This could look something like:
import torch
import torch.nn as nn

vis_feats = nn.Linear(512, 1)  # visual features
tab_feats = nn.Linear(4, 1)    # tab features
x = torch.cat((vis_feats(x), tab_feats(tab)), dim=1)  # x goes into your prediction layer
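Fleshed out a bit, that idea could look like the following sketch (the dimensions, class count, and the FusionHead name are all illustrative assumptions, not part of the original answer):
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Combines CNN image features with tabular context (weather, temperature, ...)."""
    def __init__(self, vis_dim=512, tab_dim=4, hidden=64, n_classes=10):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hidden)
        self.tab_proj = nn.Linear(tab_dim, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, vis, tab):
        h = torch.cat((torch.relu(self.vis_proj(vis)),
                       torch.relu(self.tab_proj(tab))), dim=1)
        return self.classifier(h)

# vis would come from a CNN backbone; tab holds the context attributes
head = FusionHead()
logits = head(torch.randn(8, 512), torch.randn(8, 4))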

Training a Neural Network in Python and deploying in C++

I'm opening this thread to discuss how to bring my NN model to deployment.
I built and trained a NN in Matlab with mdCNN (mdCNN is a simple Matlab library for building NNs with multi-dimensional input, e.g. conv3x3x3, which Matlab itself does not currently support). I trained my model in Matlab, and now I want to bring it to production.
After a few hours of research, I plan to do the following:
Train a NN model in Keras with the TF backend. I chose Keras because I want backward compatibility with Matlab in the future.
Grab a TensorFlow session from the Keras model (there is an example of how to do that here), then save the session to a *.pb file.
Load the model with OpenCV's dnn module; there is a specific function that does that:
cv::dnn::readNet()
Run the NN in C++ using OpenCV with:
net.setInput(blob);
Mat prob = net.forward();
I want to check with you whether this flow would really work. Are there any suggestions on how to do the deployment better, or any improvements to the flow?
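(One low-effort way to validate the frozen *.pb from the plan above, before writing any C++, is OpenCV's Python bindings, which expose the same loader; the file name and input shape below are placeholders.)
import cv2
import numpy as np

# load the frozen graph with the same loader the C++ API exposes
net = cv2.dnn.readNet("frozen_model.pb")

blob = cv2.dnn.blobFromImage(np.zeros((64, 64, 3), dtype=np.uint8))  # dummy input
net.setInput(blob)
prob = net.forward()
print(prob.shape)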
Maybe have a look at this question: Convert Keras model to C++.
The general idea is to save the model architecture as JSON and the weights as HDF5, then use the keras2cpp solution to convert it to C++.
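On the Keras side, that export is only a couple of calls (a sketch; model and the file names are placeholders):
# save the architecture as JSON and the weights as HDF5, as keras2cpp expects
with open("model.json", "w") as f:
    f.write(model.to_json())
model.save_weights("weights.h5")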

Extracting MatConvnet model weights

I am currently developing an application for facial recognition.
The algorithms are implemented and trained using the MatConvNet library (http://www.vlfeat.org/matconvnet/). At the end, I have a trained network saved as a .mat file.
I would like to know whether it is possible to extract the weights of the network from its .mat file, write them to an XML file, and read them with Caffe C++. I would like to reuse them in Caffe C++ in order to do some testing and a hardware implementation. Is there an efficient and practical way to proceed?
Thank you very much for your help.
The layer whose parameters you'd like to store must be set as 'precious'. In net.vars you can access the parameters and write them out.
There is a conversion script that converts MatConvNet models to Caffe models here, which you may find useful.
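As a rough illustration of the extraction step, a MatConvNet .mat file can also be opened from Python with scipy; note that the field names below (net.params with name/value pairs) are an assumption that depends on how the network was saved:
import numpy as np
import scipy.io

# load the trained network; the struct layout depends on the MatConvNet version
mat = scipy.io.loadmat("net.mat", struct_as_record=False, squeeze_me=True)
net = mat["net"]

# DagNN-style networks keep name/value pairs in net.params;
# adapt the field names to what your .mat file actually contains
for p in np.atleast_1d(net.params):
    value = np.asarray(p.value)
    print(p.name, value.shape)
    np.save(p.name + ".npy", value)  # a format that is easy to re-read elsewhere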
You can't use the weights of a network trained with MatConvNet directly in Caffe; you can merely import your model from MatConvNet to Caffe (https://github.com/vlfeat/matconvnet/blob/4ce2871ec55f0d7deed1683eb5bd77a8a19a50cd/utils/import-caffe.py). But this script does not support all layers, and you may have difficulties employing it.
The best way is to define your Caffe prototxt in Python to match the MatConvNet model.

TSFRESH library for python is taking way too long to process

I came across the tsfresh library as a way to featurize time series data. The documentation is great, and it seems like the perfect fit for the project I am working on.
I wanted to implement the following code, which was shared in the quick start section of the tsfresh documentation, and it seems simple enough:
from tsfresh import extract_relevant_features
feature_filtered_direct = extract_relevant_features(result, y, column_id=0, column_sort=1)
My data includes 400,000 rows of sensor data, with 6 sensors each for 15 different IDs. I started running the code, and 17 hours later it still had not finished. I figured this might be too large a dataset to run through the relevant feature extractor, so I trimmed it down to 3,000 rows, and then further down to 300. None of these runs finished within an hour, and I ended up shutting each one down after an hour or so of waiting. I also tried the standard feature extractor:
extracted_features = extract_features(timeseries, column_id="id", column_sort="time")
along with the example dataset that tsfresh presents in its quick start section, which is very similar to my original data and has about as many data points as my reduced version.
Does anybody have any experience with this code? How would you go about making it work faster? I'm using Anaconda with Python 2.7.
Update
It seems to be related to multiprocessing. Because I am on Windows, the multiprocessing code needs to be protected by:
if __name__ == "__main__":
    main()
Once I added
if __name__ == "__main__":
    extracted_features = extract_features(timeseries, column_id="id", column_sort="time")
to my code, the example data worked. I'm still having some issues with running the extract_relevant_features function and running the extract_features module on my own dataset. It seems as though it continues to run slowly. I have a feeling it's related to the multiprocessing freeze as well, but without any errors popping up it's impossible to tell. It's taking me about 30 minutes to extract features on less than 1% of my dataset.
Which version of tsfresh did you use? Which OS?
We are aware of the high computational costs of some feature calculators. There is little we can do about it. In the future we will implement some tricks like caching to increase the efficiency of tsfresh further.
Have you tried calculating only the basic features by using MinimalFeatureExtractionSettings? It will only contain basic features such as max, min, median and so on, but should run way, way faster:
from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFeatureExtractionSettings
extracted_features = extract_features(timeseries, column_id="id", column_sort="time", feature_extraction_settings=MinimalFeatureExtractionSettings())
Also, it is probably a good idea to install the latest version from the repo with pip install git+https://github.com/blue-yonder/tsfresh. We are actively developing it, and master should contain the newest and freshest version ;).
The syntax has changed slightly (see the docs); the current approach would be:
from tsfresh import extract_features
from tsfresh.feature_extraction import EfficientFCParameters, MinimalFCParameters
extract_features(timeseries, column_id="id", column_sort="time", default_fc_parameters=MinimalFCParameters())
or:
extract_features(timeseries, column_id="id", column_sort="time", default_fc_parameters=EfficientFCParameters())
Since version 0.15.0 we have improved our bindings for Apache Spark and dask.
It is now possible to use the tsfresh feature extraction directly in your usual dask or Spark computation graph.
You can find the bindings in tsfresh.convenience.bindings, with the documentation here. For example, for dask it would look something like this (assuming df is a dask.DataFrame, for example the robot failure dataframe from our example):
from tsfresh.convenience.bindings import dask_feature_extraction_on_chunk

df = df.melt(id_vars=["id", "time"],
             value_vars=["F_x", "F_y", "F_z", "T_x", "T_y", "T_z"],
             var_name="kind", value_name="value")
df_grouped = df.groupby(["id", "kind"])
features = dask_feature_extraction_on_chunk(df_grouped, column_id="id",
                                            column_kind="kind",
                                            column_sort="time",
                                            column_value="value",
                                            default_fc_parameters=EfficientFCParameters())
# or any other parameter set
Using either dask or Spark (or anything similar) might help you with very large data, both for memory and for speed, as you can distribute the work over multiple machines. Of course, we still support the usual distributors (docs) as before.
In addition to that, it is also possible to run tsfresh together with a task orchestration system, such as luigi. You can create a task to
* read in the data for only one id and kind,
* extract the features, and
* write out the result to disk,
and let luigi handle all the rest. You may find a possible implementation of this here on my blog; a rough sketch follows below.
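A minimal sketch of what such a luigi task could look like (the task name, file names, and the per-id split are assumptions, not from the blog post):
import luigi
import pandas as pd
from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFCParameters

class ExtractFeaturesForId(luigi.Task):
    """One task per id, so failed ids can be retried independently."""
    data_id = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget("features/{}.csv".format(self.data_id))

    def run(self):
        df = pd.read_csv("sensor_data.csv")  # hypothetical input file
        chunk = df[df["id"] == self.data_id]
        features = extract_features(chunk, column_id="id", column_sort="time",
                                    default_fc_parameters=MinimalFCParameters(),
                                    n_jobs=0, disable_progressbar=True)
        with self.output().open("w") as f:
            features.to_csv(f)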
I've found, at least on a multicore machine, that a better way to distribute the extract_features calculation over independent subgroups (identified by the column_id value) is through joblib.Parallel with the Loky backend.
For example, you define your feature extraction function for a single value of column_id and apply it:
from multiprocessing import cpu_count

from joblib import Parallel, delayed
from tqdm import tqdm
from tsfresh import extract_features

def map_extract_features(df):
    return extract_features(
        timeseries_container=df,
        default_fc_parameters=settings,  # settings defined elsewhere, e.g. MinimalFCParameters()
        column_id="ID",
        column_sort="DATE",
        n_jobs=1,
        disable_progressbar=True
    ).reset_index().rename({"index": "ID_CONTO"}, axis=1)

out = Parallel(n_jobs=cpu_count() - 1)(
    delayed(map_extract_features)(
        my_dataframe[my_dataframe["ID"] == id]
    ) for id in tqdm(my_dataframe["ID"].unique())
)
This method takes way less memory than specifying column_id directly in the extract_features function.

Convert Keras model to TensorFlow protobuf

We're currently training various neural networks using Keras, which is ideal because it has a nice interface and is relatively easy to use, but we'd like to be able to apply them in our production environment.
Unfortunately the production environment is C++, so our plan is to:
Use the TensorFlow backend to save the model to a protobuf
Link our production code to TensorFlow, and then load in the protobuf
Unfortunately I don't know how to access the TensorFlow saving utilities from Keras, which normally saves to HDF5 and JSON. How do I save to protobuf?
In case you don't need to utilize a GPU in the environment you are deploying to, you could also use my library, called frugally-deep. It is available on GitHub and published under the MIT License: https://github.com/Dobiasd/frugally-deep
frugally-deep allows running forward passes on already-trained Keras models directly in C++ without the need to link against TensorFlow or any other backend.
This seems to be answered in "Keras as a simplified interface to TensorFlow: tutorial", posted on The Keras Blog by Francois Chollet.
In particular, section II, "Using Keras models with TensorFlow".
You can access the TensorFlow backend with:
import keras.backend.tensorflow_backend as K
Then you can call any TensorFlow utility or function, for example:
K.tf.ConfigProto
Save your Keras model as an HDF5 file.
You can then do the conversion with the following code:
import os.path as osp

from keras import backend as K
from keras.models import load_model
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io

weight_file_path = 'path to your keras model'
net_model = load_model(weight_file_path)
sess = K.get_session()

# freeze the graph: replace variables with constants, keeping only the
# subgraph needed to compute the listed output nodes
constant_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ['name of the output tensor'])
graph_io.write_graph(constant_graph, 'output_folder_path', 'output.pb', as_text=False)
print('saved the constant graph (ready for inference) at: ',
      osp.join('output_folder_path', 'output.pb'))
Here is my sample code which handles multiple input and multiple output cases:
https://github.com/amir-abdi/keras_to_tensorflow
Make sure you change the learning phase of the Keras backend so that layers like dropout and batch normalization store their proper inference-time values. Here is a discussion about it.
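Concretely, that means switching the backend to inference mode before loading and freezing the model, using the standard Keras backend call:
from keras import backend as K

K.set_learning_phase(0)  # 0 = test phase: dropout off, batch norm uses moving statistics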