How can I print the output of a hidden layer in Lasagne - C++

I am trying to use Lasagne to train a simple neural network, and to use my own C++ code to do inference. I use the weights generated by Lasagne, but I am not able to get good results. Is there a way I can print the output of a hidden layer and/or the calculations themselves? I want to see how it works under the hood, so I can implement it the same way in C++.

I can help with Lasagne + Theano in Python; I am not sure from your question whether you work fully in C++ or only need the results from Python + Lasagne in your C++ code.
Let's say you have a simple network like this:
l_in = lasagne.layers.InputLayer(...)
l_in_drop = lasagne.layers.DropoutLayer(l_in, ...)
l_hid1 = lasagne.layers.DenseLayer(l_in_drop, ...)
l_out = lasagne.layers.DenseLayer(l_hid1, ...)
You can get the output of each layer by calling lasagne.layers.get_output on a specific layer:
lasagne.layers.get_output(l_in, deterministic=False) # this will just give you the input tensor
lasagne.layers.get_output(l_in_drop, deterministic=True)
lasagne.layers.get_output(l_hid1, deterministic=True)
lasagne.layers.get_output(l_out, deterministic=True)
When you are dealing with dropout and you are not in the training phase, it's important to call get_output with the deterministic parameter set to True to avoid non-deterministic behaviour. This applies to all layers that are preceded by one or more dropout layers.
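Note that get_output only builds a symbolic Theano expression; to actually print values you have to compile it into a callable function with theano.function. A minimal sketch, assuming the network above and a hypothetical input shape of (1, 784):
import numpy as np
import theano

# compile a function mapping the network input to the hidden layer's output
hid1_out = lasagne.layers.get_output(l_hid1, deterministic=True)
print_hid1 = theano.function([l_in.input_var], hid1_out)

# evaluate on a concrete batch and print the activations
X = np.random.rand(1, 784).astype(theano.config.floatX)  # shape must match your InputLayer
print(print_hid1(X))
You can dump these intermediate activations layer by layer and compare them against your C++ implementation to find where the two diverge.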
I hope this answers your question.

Related

How to add attributes in combination with object detection using YOLO?

I'm new to computer vision and I'm wondering how to deal with the following problem.
I'm using YOLO for a real-time object detection task. However, I'm dealing with a dataset that also gives me a few attributes such as weather, temperature, etc.
(I'm obviously able to access this information in real time, to use it in real life.)
My data differs significantly depending on the weather, temperature, etc., which is why it's useful to have access to this information.
So, is there any way to learn from both the image dataset and its associated context? I'm looking for something that is YOLO-compatible.
If such a thing isn't compatible or doesn't exist, I guess I'll just train different versions of YOLO on specific datasets associated with different contexts. Each specific version will be activated only for specific weather and temperature conditions.
Thank you in advance for any kind of help/information.
You will need to build your own custom model that combines visual features with tabular data. This could look something like:
import torch
import torch.nn as nn

vis_feats = nn.Linear(512, 1)  # visual features from the backbone
tab_feats = nn.Linear(4, 1)    # tabular features (weather, temperature, ...)
x = torch.cat((vis_feats(vis), tab_feats(tab)), dim=1)  # x goes into your prediction layer
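A more complete, self-contained sketch of the same idea (all dimensions and names here are placeholders; in practice the visual branch would take features from your detector's backbone):
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Combine CNN visual features with tabular context (weather, temperature, ...)."""
    def __init__(self, vis_dim=512, tab_dim=4, n_classes=10):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, 64)
        self.tab_proj = nn.Linear(tab_dim, 64)
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, vis, tab):
        # project each modality, then concatenate into one feature vector
        x = torch.cat((torch.relu(self.vis_proj(vis)),
                       torch.relu(self.tab_proj(tab))), dim=1)
        return self.classifier(x)

head = FusionHead()
logits = head(torch.randn(8, 512), torch.randn(8, 4))  # a batch of 8 samples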

Building models and estimators in tf2.0 without tf.keras

Given that the layers API has been deprecated, how do I build models in TF 2 without using tf.keras (or what is the recommended way to build models)? Issue #30829 has the same question, but was closed without any answers.
Update:
I'm okay with using tf.keras.layers instead of tf.layers, but once I've built all the layers and need to return the model, is there a way to NOT use Keras's Model, compile, fit, predict, and evaluate, and just do it the TensorFlow way?
If you were wondering why I would want to do something like that: I would like to use estimators to train, rather than Keras's fit function. There exists a keras_model_to_estimator, but it seems it's not mature enough yet.
Google released a migration guide from TF 1 to TF 2; see the section Converting Models.
Recommended way to build models
The guide (section "Models based on tf.layers") recommends converting tf.layers models to tf.keras.layers:
The conversion was one-to-one because there is a direct mapping from v1.layers to tf.keras.layers.
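For instance, a dense layer carries over like this (an illustrative sketch, not from the guide):
# v1:  y = tf.layers.dense(x, units=64, activation=tf.nn.relu)
# v2:  y = tf.keras.layers.Dense(64, activation="relu")(x)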
Build models without Keras
The alternative is to provide your own layer implementations (example from the guide):
W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")

@tf.function
def forward(x):
    return W * x + b

out_a = forward([1,0])
print(out_a)
But it is worth considering tf.keras.layers.Layer (example), which gives you some freedom while still integrating with the rest of Keras (and its layers).
Even with layers written with tf.keras, you are able to write your own training loop (example).
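For example, a minimal custom loop built on tf.GradientTape (a sketch; the model and loss here are placeholders), which sidesteps compile/fit entirely:
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# call train_step(x_batch, y_batch) yourself while iterating over your dataset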
To sum up, in practice TF 2.0 requires you to use tf.keras.

TSFRESH library for Python is taking way too long to process

I came across the TSfresh library as a way to featurize time series data. The documentation is great, and it seems like the perfect fit for the project I am working on.
I wanted to implement the following code that was shared in the quick start section of the tsfresh documentation, and it seems simple enough.
from tsfresh import extract_relevant_features
feature_filtered_direct = extract_relevant_features(result, y, column_id=0, column_sort=1)
My data included 400,000 rows of sensor data, with 6 sensors each for 15 different ids. I started running the code, and 17 hours later it still had not finished. I figured this might be too large of a data set to run through the relevant feature extractor, so I trimmed it down to 3,000 rows, and then further down to 300. None of these actions made the code run under an hour, and I just ended up shutting it down after an hour or so of waiting. I tried the standard feature extractor as well:
extracted_features = extract_features(timeseries, column_id="id", column_sort="time")
I also tried the example dataset that tsfresh presents in its quick start section, which is very similar to my original data, with about the same number of data points as my reduced set.
Does anybody have any experience with this code? How would you go about making it run faster? I'm using Anaconda for Python 2.7.
Update
It seems to be related to multiprocessing. Because I am on Windows, code that uses multiprocessing needs to be protected by
if __name__ == "__main__":
    main()
Once I added
if __name__ == "__main__":
    extracted_features = extract_features(timeseries, column_id="id", column_sort="time")
to my code, the example data worked. I'm still having some issues with running the extract_relevant_features function and running the extract_features module on my own data set. It seems as though it continues to run slowly. I have a feeling it's related to the multiprocessing freeze as well, but without any errors popping up it's impossible to tell. It's taking me about 30 minutes to extract features on less than 1% of my dataset.
Which version of tsfresh did you use? Which OS?
We are aware of the high computational cost of some feature calculators. There is little we can do about it. In the future we will implement some tricks like caching to increase the efficiency of tsfresh further.
Have you tried calculating only the basic features by using the MinimalFeatureExtractionSettings? It will only contain basic features such as Max, Min, Median and so on but should run way, way faster.
from tsfresh.feature_extraction import MinimalFeatureExtractionSettings
extracted_features = extract_features(timeseries, column_id="id", column_sort="time", feature_extraction_settings = MinimalFeatureExtractionSettings())
Also, it is probably a good idea to install the latest version from the repo via pip install git+https://github.com/blue-yonder/tsfresh. We are actively developing it, and master should contain the newest and freshest version ;).
The syntax has changed slightly (see the docs); the current approach would be:
from tsfresh.feature_extraction import EfficientFCParameters, MinimalFCParameters
extract_features(timeseries, column_id="id", column_sort="time", default_fc_parameters=MinimalFCParameters())
Or
extract_features(timeseries, column_id="id", column_sort="time", default_fc_parameters=EfficientFCParameters())
Since version 0.15.0 we have improved our bindings for Apache Spark and dask.
It is now possible to use the tsfresh feature extraction directly in your usual dask or Spark computation graph.
You can find the bindings in tsfresh.convenience.bindings, with the documentation here. For example, for dask it would look something like this (assuming df is a dask.DataFrame, for example the robot failure dataframe from our example):
from tsfresh.convenience.bindings import dask_feature_extraction_on_chunk
from tsfresh.feature_extraction import EfficientFCParameters

df = df.melt(id_vars=["id", "time"],
             value_vars=["F_x", "F_y", "F_z", "T_x", "T_y", "T_z"],
             var_name="kind", value_name="value")
df_grouped = df.groupby(["id", "kind"])
features = dask_feature_extraction_on_chunk(df_grouped, column_id="id", column_kind="kind",
                                            column_sort="time", column_value="value",
                                            default_fc_parameters=EfficientFCParameters())
# or any other parameter set
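The extracted features come back in long format (one row per id/feature pair); to get the usual wide feature matrix you pivot and then trigger the computation. A sketch along the lines of the binding docs (adjust to your column names):
features = features.categorize(columns=["variable"])
result = (features.reset_index(drop=True)
                  .pivot_table(index="id", columns="variable", values="value", aggfunc="sum")
                  .compute())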
Using either dask or Spark (or anything similar) might help you with very large data, both for memory as well as speed (as you can distribute the work over multiple machines). Of course, we still support the usual distributors (docu) as before.
In addition to that, it is also possible to run tsfresh together with a task orchestration system, such as luigi. You can create a task to
* read in the data for only one id and kind
* extract the features
* write out the result to disk
and let luigi handle all the rest. You may find a possible implementation of this here on my blog.
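A minimal sketch of such a task (the file names and parameters here are hypothetical):
import luigi
import pandas as pd
from tsfresh import extract_features

class ExtractFeaturesTask(luigi.Task):
    sample_id = luigi.Parameter()  # one task per id

    def output(self):
        return luigi.LocalTarget("features_%s.csv" % self.sample_id)

    def run(self):
        # read in the data for only one id, extract the features, write out the result
        df = pd.read_csv("data_%s.csv" % self.sample_id)
        feats = extract_features(df, column_id="id", column_sort="time", n_jobs=1)
        feats.to_csv(self.output().path)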
I've found, at least on a multicore machine, that a better way to distribute the extract_features calculation over independent subgroups (identified by the column_id value) is through joblib.Parallel with the Loky backend.
For example, you define your feature extraction function on a single value of column_id and apply it:
from multiprocessing import cpu_count

from joblib import Parallel, delayed
from tqdm import tqdm

def map_extract_features(df):
    return extract_features(
        timeseries_container=df,
        default_fc_parameters=settings,  # your chosen parameter set
        column_id="ID",
        column_sort="DATE",
        n_jobs=1,
        disable_progressbar=True,
    ).reset_index().rename({"index": "ID_CONTO"}, axis=1)

out = Parallel(n_jobs=cpu_count() - 1)(
    delayed(map_extract_features)(
        my_dataframe[my_dataframe["ID"] == id]
    ) for id in tqdm(my_dataframe["ID"].unique())
)
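out then holds one feature frame per id; you would typically stitch them back together, e.g.:
import pandas as pd

all_features = pd.concat(out)  # one row of features per ID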
This method takes way less memory than specifying column_id directly in the extract_features function.

How to create a basic neural network in Python/Pygame?

I am trying to understand how to create a basic, simple neural network in Python/Pygame. I have read this tutorial, and my aim is to create a program similar to the one described in "AI junkie". Although this tutorial is pretty simple, I still don't fully get it, and I am not sure about the connection between the output of the neurons and the movement of the tanks. Where can I find a simple, basic example of a program like this written in Pygame/Python, to improve my understanding of how the algorithm is implemented?
Thanking you in anticipation!
Check @Nathan's Python code in this post. It's pretty clean and also good to take as a starting point.
If you want logistic activation:
import math

def logistic(x):
    return 1 / (1 + math.exp(-x))

# derivative of logistic
def dlogistic(y):
    return y * (1 - y)
The default activation function is tanh in the original code.
It's quite straightforward to construct a network and start training:
# create a network with 5 input, 20 hidden, and 1 output nodes
n = NN(5, 20, 1)
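Training then amounts to handing the network a list of [inputs, targets] patterns, along these lines (a sketch assuming the train/test methods of that bpnn-style code):
# each pattern is [input_vector, target_vector]
pat = [
    [[0, 0, 1, 0, 1], [0]],
    [[1, 1, 0, 1, 0], [1]],
]
n.train(pat)  # backpropagation training
n.test(pat)   # prints the network's output for each pattern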

parser: parsing formulas in template files

I will first describe the problem and then what I am currently looking at in terms of libraries.
In my application, we have a set of variables that are always available. For example: TOTAL_ITEMS, PRICE, CONTRACTS, etc. (we have around 15 of them). Clients of the application would like to have certain calculations performed and displayed using those variables. Up until now, I have been constantly adding those calculations to the app. It's a pain in the butt, and I would like to make it more generic by creating a template where the user can specify a set of formulas that the application will parse and calculate.
Here is one case:
total_cost = CONTRACTS*PRICE*TOTAL_ITEMS
So I want the user to be able to define something like this in the template file:
total_cost = CONTRACTS*PRICE*TOTAL_ITEMS, plus some meta-data, like the screen to display it on. Hence they will be specifying the formula together with a screen, and the file will contain many formulas of this nature.
Right now, I am looking at two libraries: Spirit and matheval.
Would anyone make recommendations as to which is better for this task, as well as provide references, examples, or links?
Please let me know if the question is unclear, and I will try to further clarify it.
Thanks,
Sasha
If you have a fixed number of variables, it may be a bit overkill to invoke a parser, though Spirit is cool and I've been wanting to use it in a project.
I would probably just tokenize the string and make a map of your variables keyed by name (assuming all your variables are ints):
// use std::string keys: const char* keys compare pointers, not contents
map<string, int*> vars;
vars["CONTRACTS"] = &contracts;
...
Then use a simple postfix calculator function to do the actual math.
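The idea, sketched in Python for brevity (no parentheses handled; the same two-stack structure ports directly to C++):
import re

PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(expr, variables):
    # shunting-yard without parentheses: infix tokens -> postfix list
    out, ops = [], []
    for tok in re.findall(r"[A-Z_]+|\d+\.?\d*|[+\-*/]", expr):
        if tok in PREC:
            while ops and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
        else:
            out.append(variables.get(tok, tok))  # substitute variable values
    return out + ops[::-1]

def eval_postfix(tokens):
    stack = []
    for tok in tokens:
        if tok in PREC:
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b, "*": a * b, "/": a / b}[tok])
        else:
            stack.append(float(tok))
    return stack[0]

values = {"CONTRACTS": 2, "PRICE": 9.5, "TOTAL_ITEMS": 100}
print(eval_postfix(to_postfix("CONTRACTS*PRICE*TOTAL_ITEMS", values)))  # 1900.0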
Edit:
Looking at MathEval, it seems to do exactly what you want: set variables and evaluate mathematical functions using those variables. I'm not sure why you would want to create a solution at the level of a syntax parser. Do you have any requirements that MathEval does not fulfill?
Looks like it shouldn't be too hard to generate a simple parser using yacc and bison and integrate it into your code.
I don't know about matheval, but boost::spirit can do that for you pretty efficiently: see there.
If you're into template metaprogramming, you may want to have a look into Boost::Proto, but it will take some time to get started using it.