I am writing a wrapper class that takes a generic graph with a special member "train_op" to manage the training, saving, and housekeeping of my model.
I wanted to cleanly keep track of the lifetime number of training steps like so:
with tf.control_dependencies([step_add_one]):
    self.train_op = tf.identity(self.training_graph.train_op)
This fails inside TensorFlow with:
raise TypeError('Expected binary or unicode string, got %r'
I think the rub here is that train_op is the return of tf.Optimizer.minimize(), so it is not a tensor per se, but an operation.
An obvious workaround would be to call tf.identity on the training_graph.loss, but I lose a bit of abstraction because I have to then handle the learning rate etc externally. Moreover, I feel like I'm missing something.
How can I best remedy this?
You can use tf.group(), which will work with operations and tensors.
For instance:
x = tf.Variable(1.)
loss = tf.square(x)
optimizer = tf.train.GradientDescentOptimizer(0.1)
train_op = optimizer.minimize(loss)
step = tf.Variable(0)
step_add_one = step.assign_add(1)
with tf.control_dependencies([step_add_one]):
    train_op_2 = tf.group(train_op)
Now when you run train_op_2, the value of step will be incremented.
However, the best way to go (if you can modify the code that builds the graph) is to pass the global_step parameter to the minimize() call:
train_op = optimizer.minimize(loss, global_step=step)
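For reference, here is a minimal sketch of how the wrapper from the question could combine both ideas; the ModelWrapper class and its constructor are assumptions, and only training_graph.train_op comes from the question:

import tensorflow as tf

class ModelWrapper(object):
    # hypothetical wrapper that owns a lifetime step counter
    def __init__(self, training_graph):
        self.training_graph = training_graph
        # counter that survives across calls to the wrapped train_op
        self.step = tf.Variable(0, trainable=False, name='lifetime_step')
        step_add_one = self.step.assign_add(1)
        with tf.control_dependencies([step_add_one]):
            # tf.group accepts operations, unlike tf.identity
            self.train_op = tf.group(self.training_graph.train_op)

Running self.train_op then increments self.step once per training step.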
I have a working column generation algorithm in SCIP. Due to specific constraints that I include while generating columns, it might happen that the last pricing round determines that the root node is infeasible (by the Farkas pricer of course).
In case that happens, I would like to 1) relax those specific constraints, 2) resolve the LP, and 3) start pricing columns again.
So, I have created my own EventHandler class, catching the node infeasibility event:
SCIP_DECL_EVENTINITSOL(EventHandler::scip_initsol)
{
    SCIP_CALL( SCIPcatchEvent(scip_, SCIP_EVENTTYPE_NODEINFEASIBLE, eventhdlr, NULL, NULL) );
    return SCIP_OKAY;
}
And, correspondingly, the scip_exec virtual method:
SCIP_DECL_EVENTEXEC(EventHandler::scip_exec)
{
    double cur_rhs = SCIPgetRhsLinear(scip_, (*d_varConsInfo).c_primal_obj_cut);
    SCIPchgRhsLinear(scip_, (*d_varConsInfo).c_primal_obj_cut, cur_rhs + DELTA);
    return SCIP_OKAY;
}
Where (*d_varConsInfo).c_primal_obj_cut is the specific constraint to be changed, DELTA is a global parameter, and cur_rhs is the current right-hand side of that constraint. This function is neatly called after the node infeasibility proof; however, I do not know how to 'tell' SCIP that the LP should be re-solved and possible new columns should be priced. Can somebody help me out with this?
When the event handler catches the NODEINFEASIBLE event, it is already too late to do anything about the infeasibility of the problem; the node processing is already finished. Additionally, you are not allowed to change the rhs of a constraint during the solving process (because this would potentially invalidate reductions done before).
I would suggest the following: if your Farkas pricing is not able to identify new columns to render the LP feasible again, the node will subsequently be declared infeasible. Therefore, at the end of Farkas pricing (if you are at the root node), you could just price an auxiliary variable that you add to the constraint you want to relax, with bounds corresponding to your DELTA. Note that the constraint needs to have been marked as modifiable when it was created. Then, since a variable was added, SCIP will trigger another pricing round.
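A rough sketch of that idea (scip_, d_varConsInfo and DELTA are the names from the question; everything else, including the coefficient sign, is an assumption that depends on the sense of your constraint):

/* inside your PRICERFARKAS callback, when no regular column prices out at the root node */
SCIP_VAR* relaxvar;
SCIP_CALL( SCIPcreateVar(scip_, &relaxvar, "relax_primal_obj_cut",
                         0.0, DELTA,               /* bounds corresponding to DELTA */
                         0.0,                      /* assumed zero objective coefficient */
                         SCIP_VARTYPE_CONTINUOUS,
                         TRUE, FALSE, NULL, NULL, NULL, NULL, NULL) );
SCIP_CALL( SCIPaddPricedVar(scip_, relaxvar, 1.0) );
/* a coefficient of -1.0 relaxes the rhs of a <= constraint by up to DELTA */
SCIP_CALL( SCIPaddCoefLinear(scip_, (*d_varConsInfo).c_primal_obj_cut, relaxvar, -1.0) );
SCIP_CALL( SCIPreleaseVar(scip_, &relaxvar) );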
Maybe you should take a look at the PRICERFARKAS method of SCIP (https://scip.zib.de/doc/html/PRICER.php#PRICER_FUNDAMENTALCALLBACKS):
If the current LP relaxation is infeasible, it is the task of the pricer to generate additional variables that can potentially render the LP feasible again. In standard branch-and-price, these are variables with positive Farkas values, and the PRICERFARKAS method should identify those variables.
I'm using the program Maya to build a rather large project in Python. I have numerous options that will be determined by a GUI and input by the user.
One example of an option is what dimensions to render at. However, I have not made a GUI yet and am still in the testing phase.
What I ultimately want is a way for variables to be looked up and used by various classes/methods across multiple modules, and also a way to test all the code without having an actual GUI.
Should I directly pass all data to each method? My issue with this is that if method foo relies on variable A, but method bar needs to call foo, it could get really annoying passing these variables to foo from everywhere it's called.
Another way I saw was passing all variables through to each class instance and using instance variables to access them. But what if an option changes? Then I'd have to reload the imports every time it runs.
For testing, what I use now is a module that gets variables from a config file, and I import that module and use the instance variables throughout the script.
def __init__(self):
    # Get and assign all instance variables.
    options = config_section_map('Attrs', '%s\\ui_options.ini' % (data_path))
    for k, v in options.items():
        if v.lower() == 'none':
            options[k] = None
    self.check_all = int(options['check_all'])
    self.control_group = options['control_group']
Does anyone have advice, or can you point me in the right direction, for getting and using UI variables?
If the options list is not overly long and won't change, you can simply set member variables in the class initializer, which makes the initialization easy for readers to understand:
class OptionData(object):
    def __init__(self):
        # set the options on startup
        self.initial_path = "//network"
        self.initial_name = "filename"
        self.use_hdr = True
        # ... etc
If you expect the initializations to change often you can split out the initial values into the constructor for the class:
class OptionData(object):
    def __init__(self, path="//network", name="filename", hdr=True):
        self.initial_path = path
        self.initial_name = name
        self.use_hdr = hdr
If you need to persist the data, you can fill out the class by reading the cfg file as you're doing, or store it in some other way. Persisting makes things harder because you can't guarantee that the user won't open two Maya sessions at the same time, potentially changing the saved data in unpredictable ways. You can store per-file copies of the data using Maya's fileInfo.
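For instance, a minimal sketch of per-file persistence with fileInfo (the key name and the JSON round trip are assumptions, and depending on the Maya version the stored string may come back with escaped quotes that need extra handling):

import json
import maya.cmds as cmds

def save_options(data):
    # store the option values in the currently open scene file
    cmds.fileInfo('render_options', json.dumps(vars(data)))

def load_options(data):
    # fileInfo returns a list of stored strings (possibly empty)
    stored = cmds.fileInfo('render_options', query=True)
    if stored:
        for key, value in json.loads(stored[0]).items():
            setattr(data, key, value)
    return data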
In both of these cases I'd make the actual GUI take the data object (the OptionData or whatever you call yours) as an initializer. That way you can read and write the data from the GUI. Then have the actual functional code read the OptionData:
def perform_render(optiondata):
    # ... etc
That way you can run a batch process without the gui at all and the functional code will be none the wiser. The GUI's only job is to be a custom editor for the data object and then to pass it on to the final function in a valid state.
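For example, a headless batch run could look like this (perform_render is the hypothetical function sketched above):

# no GUI involved: build the data object, override what this job needs, render
opts = OptionData()
opts.use_hdr = False
perform_render(opts)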
I am using a sklearn.pipeline.Pipeline object for my clustering.
pipe = sklearn.pipeline.Pipeline([('transformer1', transformer1),
                                  ('transformer2', transformer2),
                                  ('clusterer', clusterer)])
Then I am evaluating the result by using the silhouette score.
sil = sklearn.metrics.silhouette_score(X, y)
I'm wondering how I can get the X or the transformed data from the pipeline as it only returns the clusterer.fit_predict(X).
I understand that I can do this by just splitting the pipeline as
pipe = sklearn.pipeline.Pipeline([('transformer1', transformer1),
                                  ('transformer2', transformer2)])
X = pipe.fit_transform(data)
res = clusterer.fit_predict(X)
sil = sklearn.metrics.silhouette_score(X, res)
but I would like to just do it all in one pipeline.
If you want to both fit and transform the data at intermediate steps of the pipeline, then it makes no sense to reuse the same pipeline; it is better to use a new one as you described, because calling fit() will forget all about the previously learnt data.
However, if you only want to transform() and see the intermediate data on an already fitted pipeline, then it's possible by accessing the named_steps attribute.
new_pipe = sklearn.pipeline.Pipeline([('transformer1', old_pipe.named_steps['transformer1']),
                                      ('transformer2', old_pipe.named_steps['transformer2'])])
Or directly use the inner variable steps like:
transformer_steps = old_pipe.steps  # list of (name, estimator) tuples
new_pipe = sklearn.pipeline.Pipeline([('transformer1', transformer_steps[0][1]),
                                      ('transformer2', transformer_steps[1][1])])
And then call new_pipe.transform().
Update:
If you have version 0.18 or above, you can set the non-required estimator inside the pipeline to None to get the result within the same pipeline. It's discussed in this issue on the scikit-learn GitHub.
Usage for above in your case:
pipe.set_params(clusterer=None)
pipe.transform(df)
But be aware that you may want to store the fitted clusterer somewhere else before doing so; otherwise you will need to fit the whole pipeline again when you want to use that functionality.
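Putting it together, a rough sketch of the one-pipeline approach (assuming scikit-learn 0.18+ and that silhouette_score is imported from sklearn.metrics):

from sklearn import metrics

labels = pipe.fit_predict(data)       # fit the whole pipeline and get cluster labels
pipe.set_params(clusterer=None)       # disable the final step; the transformers stay fitted
X_transformed = pipe.transform(data)  # output of the two transformers only
sil = metrics.silhouette_score(X_transformed, labels)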
I am trying to get my head around making this requirement as efficient as possible, because it is part of a combinatorial problem solver, so every little bit helps in the grand scheme of things.
Let's say I have a list of elements, in this case called transitions.
val possibleTransitions : List[Transition] = List[...] //coming from somewhere
I want to perform an (somewhat expensive) computation on each transition, to obtain another object, in this case called a State.
The natural way for me to do it is using a for-comprehension or a map. The former for me is more convenient because I want to filter out a few irrelevant State objects, such as those which were already processed earlier.
val newStates = for {
  transition <- possibleTransitions
  state <- computeExpensiveOperation(transition)
  if (confirmNewState(state))
} yield state
State contains a value, let's call it value(), which indicates some kind of attractiveness of that state. If the value() is very low (attractive), I want to discard the rest of the list and use that state. Since possibleTransitions could be a very long list (thousands), ideally I would avoid doing computeExpensiveOperation if, for example, the first State object already has the value() I want.
On the other hand, if I don't find any item with an attractive value() I want to keep all of them and add them to another list.
val newPending = pending ++ newStates
I was trying to use a Stream for this, to avoid computing all the values before processing them. If I use find() and I don't find the required item, then I won't be able to get the items in the stream (since it's use-once).
The only thing I can see possible at the moment is to use possibleItems.toStream() in the for-comprehension and create another collection, iterating through each item one by one until either I find the item (and discard the collection) or not (and use the collection with all items).
Am I missing some smarter more efficient way to do this?
I would use lazy views and convert them to a stream to cache the intermediate result, then you can get the information you need:
val newStates = for {
  transition <- possibleTransitions.view
  state <- computeExpensiveOperation(transition)
  if (confirmNewState(state))
} yield state

val newStatesStream = newStates.toStream // cache results

val attractive = newStatesStream.find(isAttractive(_))

attractive match {
  case Some(a) => // do whatever
  case None => {
    val newPending = pending ++ newStatesStream
  }
}
As the stream is lazy, it will only be evaluated up to the first attractive element (in the line defining attractive). If there is no attractive element, the complete stream will be computed and cached, and None will be returned.
When computing the new pending elements we can just append this stream to pending. (By the way: pending should probably be a Queue)
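For that aside, a minimal sketch assuming pending is an immutable Queue[State]:

import scala.collection.immutable.Queue

// pending as a FIFO of states still to be explored
val pending: Queue[State] = Queue.empty
val newPending: Queue[State] = pending ++ newStatesStream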
Suppose X is a raw, labeled (i.e., with training labels) data set, and Process(X) returns a set of Y instances that have been encoded with attributes and converted into a Weka-friendly file like Y.arff. Also suppose Process() has some 'leakage': some instances Leak = X-Y can't be encoded consistently, and need to get a default classification FOO. The training labels are also known for the Leak set.
My question is how I can best introduce instances from Leak into the Weka evaluation stream AFTER some classifier has been applied to the subset Y, folding the Leak instances in with their default classification label, before performing evaluation across the full set X? In code:
DataSource LeakSrc = new DataSource("leak.arff");
Instances Leak = LeakSrc.getDataSet();
DataSource Ysrc = new DataSource("Y.arff");
Instances Y = Ysrc.getDataSet();
classfr.buildClassifier(Y);
// YunionLeak = ??
eval.crossValidateModel(classfr, YunionLeak);
Maybe this is a specific example of folding together results from multiple classifiers?
The bounty is closing, but Mark Hall, in another forum (http://list.waikato.ac.nz/pipermail/wekalist/2015-November/065348.html), deserves what will have to count as the current answer:
You'll need to implement building the classifier for the cross-validation in your code. You can still use an evaluation object to compute stats for your modified test folds though, because the stats it computes are all additive. Instances.trainCV() and Instances.testCV() can be used to create the folds:
http://weka.sourceforge.net/doc.stable/weka/core/Instances.html#trainCV(int,%20int,%20java.util.Random)
You can then call buildClassifier() to process each training fold, modify the test fold to your heart's content, and then iterate over the instances in the test fold while making use of either Evaluation.evaluateModelOnce() or Evaluation.evaluateModelOnceAndRecordPrediction(). The latter version is useful if you need the area under the curve summary metrics (as these require predictions to be retained).
http://weka.sourceforge.net/doc.stable/weka/classifiers/Evaluation.html#evaluateModelOnce(weka.classifiers.Classifier,%20weka.core.Instance)
http://weka.sourceforge.net/doc.stable/weka/classifiers/Evaluation.html#evaluateModelOnceAndRecordPrediction(weka.classifiers.Classifier,%20weka.core.Instance)
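A rough sketch of that recipe, continuing from the question's snippet (Y, Leak and classfr are the objects defined there; the fold count and FOO_CLASS_VALUE, the index of the default class label, are assumptions):

int numFolds = 10;
java.util.Random rand = new java.util.Random(1);
Instances randY = new Instances(Y);
randY.randomize(rand);

Evaluation eval = new Evaluation(randY);              // fold statistics are additive
for (int fold = 0; fold < numFolds; fold++) {
    Instances train = randY.trainCV(numFolds, fold, rand);
    Instances test = randY.testCV(numFolds, fold);
    classfr.buildClassifier(train);                   // rebuild on each training fold
    for (int i = 0; i < test.numInstances(); i++) {
        eval.evaluateModelOnce(classfr, test.instance(i));
    }
}
// fold the Leak instances in once, with their default classification rather than
// a model prediction (the double-valued overload records a fixed prediction)
for (int i = 0; i < Leak.numInstances(); i++) {
    eval.evaluateModelOnce(FOO_CLASS_VALUE, Leak.instance(i));
}
System.out.println(eval.toSummaryString());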
Depending on your classifier, it could be very easy! Weka has an interface called UpdateableClassifier; any classifier implementing it can be updated after it has been built! The following classes implement this interface:
HoeffdingTree
IBk
KStar
LWL
MultiClassClassifierUpdateable
NaiveBayesMultinomialText
NaiveBayesMultinomialUpdateable
NaiveBayesUpdateable
SGD
SGDText
It can then be updated something like the following:
ArffLoader loader = new ArffLoader();
loader.setFile(new File("/data/data.arff"));
Instances structure = loader.getStructure();
structure.setClassIndex(structure.numAttributes() - 1);

// build the classifier on the header only, then feed it instances one at a time
NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
nb.buildClassifier(structure);

Instance current;
while ((current = loader.getNextInstance(structure)) != null) {
    nb.updateClassifier(current);
}