Tensorflow RNN slice error - python-2.7

I am attempting to create a multilayered RNN using LSTMs in TensorFlow. I am using TensorFlow 0.9.0 and Python 2.7 on Ubuntu 14.04.
However, I keep getting the following error:
tensorflow.python.framework.errors.InvalidArgumentError: Expected begin[1] in [0, 2000], but got 4000
when I use
rnn_cell.MultiRNNCell([cell]*num_layers)
if num_layers is greater than 1.
My code:
size = 1000
config.forget_bias = 1
config.num_layers = 3
cell = rnn_cell.LSTMCell(size, forget_bias=config.forget_bias)
cell_layers = rnn_cell.MultiRNNCell([cell] * config.num_layers)
I would also like to be able to switch to using GRU cells but this gives me the same error:
Expected begin[1] in [0, 1000], but got 2000
I have tried explicitly setting
num_proj = 1000
which also did not help.
Is this something to do with my use of concatenated states? I have attempted to set
state_is_tuple=True
which gives:
ValueError: Some cells return tuples of states, but the flag state_is_tuple is not set. State sizes are: [LSTMStateTuple(c=1000, h=1000), LSTMStateTuple(c=1000, h=1000), LSTMStateTuple(c=1000, h=1000)]
Any help would be much appreciated!

I'm not sure why this worked, but I added in a dropout wrapper, i.e.
if Training:
    cell = rnn_cell.DropoutWrapper(cell, output_keep_prob=config.keep_prob)
And now it works.
This works for both LSTM and GRU cells.
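For reference, here is a minimal sketch of that workaround; this is a sketch against the TF 0.9-era rnn_cell API, and the size/num_layers/keep_prob values are stand-ins for the poster's config object:
import tensorflow as tf
from tensorflow.python.ops import rnn_cell  # TF 0.9-era module path

# Stand-in values for the poster's config object.
size, num_layers, keep_prob, is_training = 1000, 3, 0.5, True

cell = rnn_cell.LSTMCell(size, forget_bias=1.0)
if is_training:
    # Wrapping the cell in a DropoutWrapper before stacking is the
    # workaround that made MultiRNNCell run.
    cell = rnn_cell.DropoutWrapper(cell, output_keep_prob=keep_prob)
cell_layers = rnn_cell.MultiRNNCell([cell] * num_layers)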

This problem occurs because you have increased the number of layers in your cell, but your initial state vector has not been enlarged to match. If your initial_vector has size [batch_size, 50], then:
initial_vector = tf.concat(1, [initial_vector] * num_layers)
Now pass this to the decoder as the initial state vector.
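A minimal sketch of that tiling, assuming the pre-1.0 tf.concat(axis, values) argument order and made-up shapes:
import tensorflow as tf

batch_size, state_size, num_layers = 32, 50, 3

# Single-layer initial state, shape [batch_size, 50].
initial_vector = tf.zeros([batch_size, state_size])

# With num_layers stacked cells and concatenated (non-tuple) states, the
# decoder expects a state that is num_layers times wider, so tile it.
initial_state = tf.concat(1, [initial_vector] * num_layers)  # [batch_size, 150]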

Pyomo accessing solver status

I am solving a MINLP with Pyomo and SCIP as the solver.
After the problem is solved I automatically get this output:
SCIP Status : problem is solved [optimal solution found]
Solving Time (sec) : 13.00
Solving Nodes : 1845
Primal Bound : +5.21506487017157e+03 (26 solutions)
Dual Bound : +5.21506487017157e+03
Gap : 0.00 %
I want to save the values of this output. With the solving time it works like this:
results = opt.solve(instance)
Time = results.solver.time
However, if I try the same with the Gap and the Nodes, I get an error:
AttributeError: Unknown attribute `gap' for object with type <class 'pyomo.opt.results.solver.SolverInformation'>
I have also tried using:
GAP = results.solution.gap
which results in an output that says undefined.
Is there any other way to access those two values?
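One hedged workaround, since I cannot confirm that the SCIP plugin exposes the gap or node count directly: Pyomo's results object does carry the problem bounds, so the gap can be recomputed from them (the node count would likely require parsing SCIP's log file instead). A sketch, assuming the SCIP interface populates results.problem.*_bound:
# Recompute the relative gap from the bounds Pyomo stores.
results = opt.solve(instance)
lower = results.problem.lower_bound   # dual bound
upper = results.problem.upper_bound   # primal bound
gap = abs(upper - lower) / max(abs(upper), 1e-10)
print("Gap: %.2f %%" % (100.0 * gap))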

Is it possible to output error on training data through the Experimenter module?

I am trying to build a learning curve in WEKA that compares training and testing accuracy against training set size. The testing-accuracy-versus-training-set-size portion is easily done (through LearningRateProducer), but what I can't figure out is how to get training accuracy results through the Experimenter module in an automated way. Here is an example of the output I'm looking for; this result is from the Simple CLI module after running IBk.
=== Error on training data ===
Correctly Classified Instances 4175 100 %
Incorrectly Classified Instances 0 0 %
Kappa statistic 1
Mean absolute error 0.0005
Root mean squared error 0.0012
Relative absolute error 0.717 %
Root relative squared error 0.6913 %
Total Number of Instances 4175
I could do this through the Simple CLI, but I have many experiments to generate learning curves for and would prefer a less manual way. An Experimenter-based solution would be most desirable.
Thanks,
B
I was able to get this information by installing the groovy console and using the following script:
data = (new weka.core.converters.ConverterUtils.DataSource("/Path/To/Arff")).getDataSet()
data.setClassIndex(data.numAttributes() - 1)
data.randomize(new Random(1))
classifier = new weka.classifiers.trees.J48()
println "|train|\t%acc_{train}\t%acc_{test}"
stepSize = data.numInstances().intdiv(10)  // integer step; plain / can yield a BigDecimal in Groovy
for (int i = stepSize; i < data.numInstances(); i += stepSize) {
    subset = new weka.core.Instances(data, 0, i)  // train on the first i instances (the copy constructor is 0-indexed)
    classifier.buildClassifier(subset)
    evaluationObject = new weka.classifiers.evaluation.Evaluation(subset)
    evaluationObject.evaluateModel(classifier, subset)  // error on the training data
    testSubset = new weka.core.Instances(data, i, data.numInstances() - i)  // hold out the rest
    evaluationObjectTest = new weka.classifiers.evaluation.Evaluation(subset)
    evaluationObjectTest.evaluateModel(classifier, testSubset)  // error on the held-out data
    println "" + i + "\t" + evaluationObject.pctCorrect() + "\t" + evaluationObjectTest.pctCorrect()
}
Credit to Eibe Frank: https://weka.8497.n7.nabble.com/How-to-generating-learning-curve-for-training-set-td41654.html
The solution is comparable to the Experimenter: you can call classifiers directly from Groovy code and batch them however you need.

Run tflite accuracy tool on official tensorflow resnet50 model

I have downloaded the official resnet50 model provided here: https://github.com/tensorflow/models/tree/master/official/resnet. I needed a quantized tflite version of this model, and hence I converted it to the tflite format as follows:
toco --output_file /tmp/resnet50_quant.tflite --saved_model_dir <path/to/saved_model_dir> --output_format TFLITE --quantize_weights QUANTIZE_WEIGHTS
After this, I thought I'd run the tflite accuracy tool to verify that the accuracy of this model is still reasonable. However, I run into the following issue:
bazel run -c opt --copt=-march=native --cxxopt='--std=c++11' -- //tensorflow/contrib/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval --model_file=/tmp/resnet50_quant.tflite --ground_truth_images_path=<path/to/images> --ground_truth_labels=/tmp/validation_labels.txt --model_output_labels=/tmp/tf_labels.txt --output_file_path=/tmp/accuracy_output.txt --num_images=0
INFO: Analysed target //tensorflow/contrib/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval up-to-date:
bazel-bin/tensorflow/contrib/lite/tools/accuracy/ilsvrc/imagenet_accuracy_eval
INFO: Elapsed time: 14.589s, Critical Path: 14.28s
INFO: 3 processes: 3 local.
INFO: Build completed successfully, 4 total actions
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/tools/accuracy/ilsvrc/imagenet_accuracy_eval '--model_file=/tmp/resnet50_quant.tflite' '--ground_truth_images_path=<path/to/images>' '--ground_truth_labels=/tmp/validation_labels.txt' '--model_output_labels=/tmp/tf_labels.txt' '--output_file_path=/tmp/accuracy_output.txt' '--num_images=0'
2018-10-12 15:30:06.237058: E tensorflow/contrib/lite/tools/accuracy/ilsvrc/imagenet_accuracy_eval.cc:155] Starting evaluation with: 4 threads.
2018-10-12 15:30:06.536802: E tensorflow/contrib/lite/tools/accuracy/ilsvrc/imagenet_accuracy_eval.cc:98] Starting model evaluation: 50000
2018-10-12 15:30:06.565334: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at run_tflite_model_op.cc:89 : Invalid argument: Data shapes mismatch for tensors: 0 expected: [64,224,224,3] got: [1,224,224,3]
2018-10-12 15:30:06.565453: F tensorflow/contrib/lite/tools/accuracy/ilsvrc/imagenet_model_evaluator.cc:222] Non-OK-status: eval_pipeline->Run(CreateStringTensor(image_label.image), CreateStringTensor(image_label.label)) status: Invalid argument: Data shapes mismatch for tensors: 0 expected: [64,224,224,3] got: [1,224,224,3]
[[{{node stage_run_tfl_model_output}} = RunTFLiteModel[input_type=[DT_FLOAT], model_file_path="/tmp/resnet50_quant.tflite", output_type=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](stage_inception_preprocess_output)]]
It looks like the issue is that the official resnet model has an input tensor of [64, 224, 224, 3] whereas the accuracy tool provides an input of [1, 224, 224, 3]. So, the official model seems to expect a batch of 64 images and hence the accuracy tool fails.
I was wondering what I need to do to get the accuracy tool to run on the official resnet50 model? I'm guessing that although the input tensor for resnet 50 is [64, 224, 224, 3], there should be a way to still run a single image through the model.
There are two ways to go about it:
1. Resize the input of your model to [1, 224, 224, 3] and run the tool. You could try looking at this and then modifying this file accordingly.
2. Modify the same tool so that it feeds in 64 images at a time instead of 1. You can look at the same code file I point to above and feed 64 images at a time instead of 1.
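For the first option, a quick way to check that the converted model even accepts a resized input is the TFLite Python interpreter, which can resize input tensors at load time. A sketch, assuming a contrib-era TF 1.x build; the zero image is just a placeholder:
import numpy as np
import tensorflow as tf

# Load the converted model and shrink its batch dimension from 64 to 1.
interpreter = tf.contrib.lite.Interpreter(model_path="/tmp/resnet50_quant.tflite")
input_index = interpreter.get_input_details()[0]["index"]
interpreter.resize_tensor_input(input_index, [1, 224, 224, 3])
interpreter.allocate_tensors()

# Placeholder image; real evaluation would feed preprocessed ImageNet data.
image = np.zeros((1, 224, 224, 3), dtype=np.float32)
interpreter.set_tensor(input_index, image)
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])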
If you're looking for long-term support, consider filing a feature request on Github where we can support batching.

Reaching limitations creating a gallery of plots

In a script I'm using, the code generates a figure containing a number of subplots. Usually it creates a rectangular grid of plots, but for its current use the horizontal parameter has only 1 value, and the vertical parameter has considerably more values than before. This causes my program to crash while running, presumably because the vertical dimension is too large. The code that's causing the issue is:
#can't get past the first line here
self.fig1 = plt.figure('Title',figsize=(4.6*numXparams,2.3*numYparams))
self.gs = gridspec.GridSpec(numYparams,numXparams)
self.gs.update(left=0.03, right=0.97, top=0.9, bottom=0.1, wspace=0.5, hspace=0.5)
and then later in a nested for loop running over both params:
ax = plt.subplot(self.gs[par0, par1])
The error I'm getting is:
X Error of failed request: badAlloc (insufficient resources for operation)
Major opcode of failed request: 53 (X_CreatePixmap)
Serial number of failed request: 295
Current serial number in output stream: 296
My vertical parameter currently has 251 values in it, so I can see how 251*2.3 inches could lead to trouble. I added in the 2.3*numYparams because the plots were overlapping, but I don't know how to create the figure any smaller without changing how the plots are arranged in the figure. It is important for these plots to stay in a vertically oriented column.
There are a couple of errors in your code. Fixing them allowed me to generate the figure you are asking for.
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

# I needed the figsize keyword here
fig1 = plt.figure(figsize=(4.6*numXparams, 2.3*numYparams))
# You had x and y switched around here
gs = gridspec.GridSpec(numYparams, numXparams)
gs.update(left=0.03, right=0.97, top=0.9, bottom=0.1, wspace=0.5, hspace=0.5)
# I ran this loop
for i in range(numYparams):
    ax = fig1.add_subplot(gs[i, 0])  # note the y coord in the gridspec comes first
    ax.text(0.5, 0.5, i)  # just an identifier
fig1.savefig('column.png', dpi=50)  # had to drop the dpi, because you can't have a png that tall!
and this is the top and bottom of the output figure:
Admittedly, there was a lot of space above the first and below the last subplot, but you can fix that by playing with the figure dimensions or gs.update.
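Separately, since the badAlloc comes from the X server trying to allocate an on-screen pixmap for a very tall window, one possible dodge (not in the original answer, so take it as a sketch) is to render entirely off-screen with matplotlib's Agg file-only backend and never call show():
import matplotlib
matplotlib.use('Agg')  # must be selected before pyplot is imported
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

numXparams, numYparams = 1, 251  # the poster's current dimensions
fig1 = plt.figure(figsize=(4.6 * numXparams, 2.3 * numYparams))
gs = gridspec.GridSpec(numYparams, numXparams)
fig1.savefig('column.png', dpi=50)  # low dpi keeps the pixel height manageable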

OpenCV 3 SVM train with large Data

I recently switched from OpenCV 2.4.6 to 3.0.
My code looks like this:
Ptr<ml::SVM> pSVM = ml::SVM::create();
pSVM->setType(cv::ml::SVM::C_SVC);
pSVM->setKernel(cv::ml::SVM::LINEAR);
pSVM->setC(1);
cv::Ptr<cv::ml::TrainData> TrainData = cv::ml::TrainData::create(TrainMatrix, cv::ml::ROW_SAMPLE, Labels);
// TrainMatrix is a cv::Mat with 35000 rows and 1900 cols of float values, one feature vector per row.
// Labels is a std::vector<int> with 35000 elements containing 1 and -1.
pSVM->trainAuto(TrainData, 10, cv::ml::SVM::getDefaultGrid(cv::ml::SVM::C), cv::ml::SVM::getDefaultGrid(cv::ml::SVM::GAMMA), cv::ml::SVM::getDefaultGrid(cv::ml::SVM::P),
cv::ml::SVM::getDefaultGrid(cv::ml::SVM::NU), cv::ml::SVM::getDefaultGrid(cv::ml::SVM::COEF), cv::ml::SVM::getDefaultGrid(cv::ml::SVM::DEGREE), false);
When my program reaches the trainAuto method, it crashes, and the error message says it cannot allocate 524395968 bytes. This number seems a little bit high. Before the crash the program consumes about 400 MB in debug mode.
If I put a smaller matrix (about 500 rows) into the method, everything runs normally.
Has anyone had the same problem and found a solution?
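For scale, a back-of-envelope check (assuming 4-byte CV_32F features) shows the failed allocation is only about twice the size of the training matrix itself, which would be plausible for an internal copy of the samples made during trainAuto's 10-fold cross-validation:
# Rough memory arithmetic for the matrix described above.
rows, cols, bytes_per_float = 35000, 1900, 4
train_matrix_bytes = rows * cols * bytes_per_float  # 266,000,000 bytes, ~254 MiB
failed_alloc = 524395968                            # from the error message, ~500 MiB
print(failed_alloc / float(train_matrix_bytes))     # ~1.97, i.e. roughly 2x the matrix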