TorchScript/C++ jit::trace model - Accessing layer parameters - C++

I have a model that I trained in Python, traced using torch.jit.trace, and loaded into C++ using torch::jit::load.
Is there a way to access the last layer to pull the model's required output depth (for example, if it is a Conv2D layer going from 16 -> 2, I want to predefine a tensor of depth 2, i.e. [b, d=2, x, y])?

Not the most elegant way of solving this, but the most straightforward was just passing a dummy tensor through the model and reading the output shape. Another approach I tried was walking the parameter list and looking for "softmax", but unfortunately I can't guarantee everyone's model will spell it the same way. If someone else has a better answer, feel free to share, but this will have to do for now.
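For what it's worth, here is a minimal sketch of that dummy-forward idea in Python (the file name and input sizes are placeholders); the C++ side is analogous, calling forward on the loaded module and reading the sizes of the returned tensor.
import torch

# Load the traced module (path is a placeholder) and push a dummy batch through it
module = torch.jit.load("model.pt")
module.eval()

with torch.no_grad():
    dummy = torch.zeros(1, 16, 64, 64)  # [b, c, x, y]; sizes here are assumptions
    out = module(dummy)

# The channel dimension of the output is the depth to preallocate,
# e.g. 2 for a 16 -> 2 Conv2D head
print(out.shape)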

Related

Any way to return the active constraints of a solved model in pyomo?

I am using a concrete model that solves successfully. Is there any way to retrieve the ACTIVE constraints?
"Active" is a bit tricky, as that is technically an internal state of the solver (that most solvers don't report). You can get a reasonable approximation of active constraints with the method:
pyomo.util.infeasible.log_close_to_bounds()
...or look at what that method is doing (it's only 40 lines of code) and implement something specific to your use case.
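As a minimal usage sketch (the toy LP and the glpk solver choice are placeholder assumptions; any installed LP solver works):
import logging
from pyomo.environ import ConcreteModel, Var, Constraint, Objective, NonNegativeReals, SolverFactory
from pyomo.util.infeasible import log_close_to_bounds

# Toy LP: at the optimum, c1 is tight and therefore "active"
m = ConcreteModel()
m.x = Var(domain=NonNegativeReals)
m.y = Var(domain=NonNegativeReals)
m.c1 = Constraint(expr=m.x + m.y <= 10)
m.c2 = Constraint(expr=m.x <= 100)
m.obj = Objective(expr=-(m.x + m.y))  # maximize x + y by minimizing its negative

SolverFactory('glpk').solve(m)  # solver choice is an assumption

# Logs constraints/variables whose value is within a tolerance of a bound,
# a reasonable proxy for the active set
logging.basicConfig(level=logging.INFO)
log_close_to_bounds(m)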

Is it possible to upload an initial solution and check if it is feasible

I would like to know whether, after building a Pyomo model, it is possible to pass an arbitrary solution to the model and check whether it is feasible: return true if the solution is feasible and false if it is not.
I expect to receive a true/false result depending on the feasibility of the uploaded solution.
There is no general utility for checking whether a model is feasible at an arbitrary point, but something very close to what you want can be found here: https://github.com/Pyomo/pyomo/blob/master/pyomo/contrib/gdpopt/util.py#L176
You could implement your own is_feasible function by copying the code that loops over the constraints and variables.
One verbose but effective solution is to fix all of your variables to their current values and then re-solve; if the solver reports infeasibility, the uploaded point is infeasible.
There are multiple ways to iterate over all model components of a certain type; for example, you can iterate over the variables and call model.var_name.fix().
I haven't tested this, but it should work.
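A rough sketch of such an is_feasible helper, modeled on the loop in the gdpopt utility linked above (the tolerance and the assumption that candidate values have already been assigned to the variables are mine):
from pyomo.environ import Var, Constraint, value

def is_feasible(model, tol=1e-6):
    # Variable bounds (a variable with no assigned value counts as infeasible)
    for var in model.component_data_objects(Var, descend_into=True):
        if var.value is None:
            return False
        if var.has_lb() and var.value < var.lb - tol:
            return False
        if var.has_ub() and var.value > var.ub + tol:
            return False
    # Active constraints, evaluated at the current variable values
    for con in model.component_data_objects(Constraint, active=True, descend_into=True):
        body = value(con.body)
        if con.has_lb() and body < value(con.lower) - tol:
            return False
        if con.has_ub() and body > value(con.upper) + tol:
            return False
    return True

# Usage: assign the candidate point, then check it
# model.x.set_value(3.0)
# print(is_feasible(model))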

How to save and restore a tf.estimator.Estimator model with export_savedmodel?

I started using TensorFlow recently and I'm trying to get used to tf.estimator.Estimator objects. I would like to do something that a priori seems quite natural: after having trained my classifier, i.e. an instance of tf.estimator.Estimator (with the train method), I would like to save it to a file (whatever the extension) and then reload it later to predict the labels of some new data. Since the official documentation recommends using the Estimator APIs, I would guess something as important as that is implemented and documented.
I saw on another page that the method to do that is export_savedmodel (see the official documentation), but I simply don't understand the documentation. There is no explanation of how to use this method. What is the serving_input_fn argument? I never encountered it in the Creating Custom Estimators tutorial or in any of the other tutorials I read. From some googling, I discovered that around a year ago estimators were defined using another class (tf.contrib.learn.Estimator), and it looks like tf.estimator.Estimator reuses some of the previous APIs, but I can't find a clear explanation of this in the documentation.
Could someone please give me a toy example? Or explain to me how to define/find this serving_input_fn?
And then how would one load the trained classifier again?
Thank you for your help!
Edit: I discovered that one doesn't necessarily need to use export_savedmodel to save the model; it is actually done automatically. If we later define a new estimator with the same model_dir argument, it will also automatically restore the previous estimator, as explained here.
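For example, a sketch of that restore-by-model_dir behavior (my_model_fn, the directory, and the predict input function are placeholders for whatever was used during training):
import tensorflow as tf

# Pointing a fresh Estimator at the same model_dir restores the latest
# checkpoint automatically, so predictions work without retraining.
classifier = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='/tmp/my_model_dir')
for prediction in classifier.predict(input_fn=my_predict_input_fn):
    print(prediction)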
As you figured out, the estimator automatically saves and restores the model for you during training. export_savedmodel might be useful if you want to deploy your model to the field (for example, to provide your best model to TensorFlow Serving).
Here is a simple example:
def serving_input_fn():
    inputs = {'features': tf.placeholder(tf.float32, [None, 128, 128, 3])}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

est.export_savedmodel(export_dir_base=FLAGS.export_dir, serving_input_receiver_fn=serving_input_fn)
Basically, serving_input_fn is responsible for replacing the dataset pipelines with a placeholder. At deployment time you can feed data into this placeholder as the input to your model for inference or prediction.
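To cover the "load it again" part: in TF 1.x you can wrap the exported SavedModel in a predictor. The directory below stands in for the timestamped folder that export_savedmodel writes under export_dir_base, and the 'features' key matches the serving_input_fn above.
import numpy as np
import tensorflow as tf

# Path of the timestamped SavedModel directory written by export_savedmodel
# (placeholder value; use the path returned by the call above)
export_dir = '/tmp/exports/1537000000'

predict_fn = tf.contrib.predictor.from_saved_model(export_dir)  # TF 1.x API
batch = np.zeros((1, 128, 128, 3), dtype=np.float32)            # dummy image batch
print(predict_fn({'features': batch}))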

Softimage access parameters in C++

I'm a bit desperate here... I'm trying to access one parameter of a light in Softimage.
First, when we do this:
light.GetParameterValue(L"LightExponent")
it works!
But when we try:
light.GetParameterValue(L"soft_light.atten")
it fails completely!
I tried to find documentation, but the only code I could find is in Python, with no indication of the C++ equivalent. In Python, they manage to do something like:
xsi = Application
test = xsi.GetValue("LightName.point.soft_light.atten")
But I cannot figure out what Application is, and it's not the same as XSI::Application in the C++ API.
So, any idea how to access this value? Also, if I could find the C++ equivalent of Application.GetValue (in the script you can see Application.SetValue... so I imagine GetValue exists in some form!), that would be nice... I could then simply take the name of the light and append the rest of the path to reach the value, like:
SomeUnknownClassForNow::GetValue(light.GetName() + ".point.soft_light.atten");
Any idea?
With the help of one of our clients, I finally managed to find a proper solution to this.
First, there are some direct parameters, like "LightExponent". But there are other parameters associated with an object such as a light that live in separate categories called shaders.
For a light, or at least a point light, there is only one shader, called "soft_light". It can be accessed with:
light.GetShaders()[0]
You can verify its name with GetName(), which in this case would be "LightName.point.soft_light".
Finally, to access the "soft_light.atten" parameter:
light.GetShaders()[0].GetParameterValue("atten")
So, in Softimage, there is a sort of hierarchy inside objects, and these parts are separated out as shaders. For a more complex object, just find the right shader and extract its parameter.

How to use Openlayer refresh strategy with django-olwidget?

I would like to have a "realtime"-like map.
My main question is:
How to use django-olwidget with openlayers OpenLayers.Strategy.Refresh?
Do I need to start over "from scratch" and use OpenLayers manually?
With django-olwidget, the data is embedded directly in the web page, so there are no arguments defining a data source or protocol.
My "second" question is about which format should I choose...
geoJSON? kml? other?
Can those formats contain openlayers point specific "style" specifications like:
{'graphic_name': 'square', 'point_radius': 10, 'fill_color': "#ABBAAB', 'stroke_color':'#BAABBA'}.
I already overriden the default map template olwidget/multi_layer_map.html to access my map object in JS. I think it should be rather simple to apply a js function on each data layers before passing it to the map.
Thanx in advance.
PS: I'm a French speaker.
PS2: I asked this question as a feature request on github: https://github.com/yourcelf/olwidget/issues/89
If you're going to use regularly-refreshing data (without refreshing the page) and serialization formats like geoJSON and KML, django-olwidget won't help you very much out of the box. You might find it easier just to use OpenLayers from scratch.
But if you really wanted to use django-olwidget, here's what I would do:
Subclass olwidget.InfoLayer to create a new vector layer type that uses a network-native format like geoJSON or KML to acquire its data.
Add a corresponding python subclass to be able to use it with Django forms or whatever the use case is. You'll probably need to specify things like the URL from which the map will poll its data.
This is a lot of work compared to writing for OpenLayers directly. The advantage would be that you get easy Django form integration with the same map.
As to which serialization format to use: I'm partial to JSON flavors over XML flavors such as KML, but it really doesn't matter much -- Django and OpenLayers both speak both fluently.
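If you do go the polling route, the URL the layer refreshes from can be a very small Django view that serializes your objects to GeoJSON. A sketch, assuming a hypothetical Sensor model with a PointField named location and a name field:
# views.py -- hypothetical model and field names
import json
from django.http import HttpResponse
from myapp.models import Sensor  # assumed: GeoDjango model with PointField "location"

def sensor_geojson(request):
    features = []
    for s in Sensor.objects.all():
        features.append({
            "type": "Feature",
            "geometry": json.loads(s.location.geojson),  # GEOSGeometry -> GeoJSON fragment
            "properties": {"name": s.name},
        })
    return HttpResponse(
        json.dumps({"type": "FeatureCollection", "features": features}),
        content_type="application/json",
    )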
About the styling, you should take a look at StyleMap [1], where you can set style properties according to attributes.
For the main question, I’m sorry I don’t know django-olwidget…
1 - http://openlayers.org/dev/examples/stylemap.html