Error in TensorFlow program - python-2.7

I am learning TensorFlow and I stumbled upon this example code for creating a simple multi-layer sigmoid network. The program in the link is for the MNIST database and handwritten digit classification.
I want to train a network for a regression task. I have 30 inputs (float) which are used to predict one output (float), so I tweaked the code to change the task from classification to regression.
My problem is that I'm getting an error in tf.Session.run(). The code and the error log are given below.
import test2
import tensorflow as tf

feed_input = test2.read_data_sets()

learning_rate = 0.001
training_epochs = 100
batch_size = 1716
display_step = 1

n_hidden_1 = 256
n_hidden_2 = 256
n_hidden_3 = 256
n_input = 30

x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None])

def multilayer_perceptron(_X, _weights, _biases):
    # Hidden layer with RELU activation
    layer_1 = tf.nn.relu(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1']))
    # Hidden layer with RELU activation
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']))
    layer_3 = tf.nn.relu(tf.add(tf.matmul(layer_2, _weights['h3']), _biases['b3']))
    return tf.matmul(layer_3, weights['out']) + biases['out']

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_hidden_3, 1]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'b3': tf.Variable(tf.random_normal([n_hidden_3])),
    'out': tf.Variable(tf.random_normal([1]))
}

pred = multilayer_perceptron(x, weights, biases)
n_pred = tf.mul(pred, tf.convert_to_tensor(10000.00))

cost = tf.nn.sigmoid_cross_entropy_with_logits(n_pred, y)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0
        total_batch = int(feed_input.train._num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = feed_input.train.next_batch(batch_size)
            # Fit training using batch data
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys}) / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost)
    print "Optimization Finished!"
runfile('/mnt/sdb6/Projects/StockML/demo1.py', wdir='/mnt/sdb6/Projects/StockML')
Reloaded modules: tensorflow.python.ops.nn_grad,
tensorflow.python.training.momentum,
. . . .
tensorflow.python.util.protobuf,
google.protobuf.internal.enum_type_wrapper,
tensorflow.python.ops.nn_ops, tensorflow.python,
tensorflow.python.platform.test,
google.protobuf.internal.api_implementation, tensorflow,
google.protobuf.internal.encoder
Traceback (most recent call last):
  File "", line 1, in
    runfile('/mnt/sdb6/Projects/StockML/demo1.py', wdir='/mnt/sdb6/Projects/StockML')
  File "/usr/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 685, in runfile
    execfile(filename, namespace)
  File "/usr/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 78, in execfile
    builtins.execfile(filename, *where)
  File "/mnt/sdb6/Projects/StockML/demo1.py", line 69, in
    sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
  File "/home/rammak/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 345, in run
    results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
  File "/home/rammak/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 406, in _do_run
    except tf_session.StatusNotOK as e:
AttributeError: 'module' object has no attribute 'StatusNotOK'

A protobuf error is usually an installation issue; try running it in a virtualenv.
# On Mac:
$ sudo easy_install pip # If pip is not already installed
$ sudo pip install --upgrade virtualenv
Next, set up a new virtualenv environment. To set it up in the directory ~/tensorflow, run:
$ virtualenv --system-site-packages ~/tensorflow
$ cd ~/tensorflow
Then activate the virtualenv:
$ source bin/activate # If using bash
$ source bin/activate.csh # If using csh
(tensorflow)$ # Your prompt should change
Inside the virtualenv, install TensorFlow:
(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
You can then run your TensorFlow program like:
(tensorflow)$ python tensorflow/models/image/mnist/convolutional.py
# When you are done using TensorFlow:
(tensorflow)$ deactivate # Deactivate the virtualenv
$ # Your prompt should change back
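Once TensorFlow is installed in the virtualenv, a quick sanity check is to run a trivial session before your own script. This is just a minimal sketch using the old TensorFlow 0.x API that this question targets:

# Minimal sanity check inside the activated virtualenv (TensorFlow 0.x API).
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))  # should print: Hello, TensorFlow!

If this works but your own program still fails with the StatusNotOK error, the problem is more likely a broken or mixed installation outside the virtualenv.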

If you are just beginning to learn TensorFlow, I would suggest trying out the examples in TensorFlow/skflow first; once you are more familiar with TensorFlow, it will be fairly easy to insert TensorFlow code to build the custom model you want (there are also examples for this). A rough regression sketch is shown below.
I hope the examples for image and text understanding get you started. Let us know if you encounter any issues (post issues or tag skflow on SO)!
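For the regression task described in the question, a rough sketch with skflow could look like the following. The exact class name and constructor arguments may differ between skflow versions, and X_train / y_train / X_test are assumed to be your 30-feature inputs and float targets (they are not defined in the question):

# Rough sketch only; skflow's API changed between releases.
import skflow

regressor = skflow.TensorFlowDNNRegressor(
    hidden_units=[256, 256, 256],   # three hidden layers, mirroring the question
    batch_size=1716,
    steps=10000,
    learning_rate=0.001)
regressor.fit(X_train, y_train)     # X_train: (n_samples, 30), y_train: (n_samples,)
predictions = regressor.predict(X_test)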

Change your logging level from WARN to INFO, so that you can get a better picture of the error you're getting.
For reference, there are 5 logging levels:
DEBUG
INFO
WARN
ERROR
FATAL
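A minimal sketch of raising the log level, assuming TensorFlow routes its Python-side messages through a logger named 'tensorflow' (later releases also offer tf.logging.set_verbosity(tf.logging.INFO); the exact API depends on your version):

import logging

# Raise the 'tensorflow' logger from WARN to INFO to see more detail.
logging.getLogger('tensorflow').setLevel(logging.INFO)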

Related

Using AWS Lambda docker image to add self module error

I want to build a Docker image to add my deep learning GRU model to my Lambda function.
Here is GRUModel.py:
import torch
import torch.nn as nn

class GRUModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim, dropout_prob):
        super(GRUModel, self).__init__()
        # Defining the number of layers and the nodes in each layer
        self.layer_dim = layer_dim
        self.hidden_dim = hidden_dim
        # GRU layers
        self.gru = nn.GRU(
            input_dim, hidden_dim, layer_dim, batch_first=True, dropout=dropout_prob
        )
        # Fully connected layer
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # Initializing hidden state for first input with zeros
        h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
        # Forward propagation by passing in the input and hidden state into the model
        out, _ = self.gru(x, h0.detach())
        # Reshaping the outputs in the shape of (batch_size, seq_length, hidden_size)
        # so that it can fit into the fully connected layer
        out = out[:, -1, :]
        # Convert the final state to our desired output shape (batch_size, output_dim)
        out = self.fc(out)
        return out
Here is lambda.py:
import torch
import torch.nn as nn
import joblib
from GRUModel import GRUModel

def handler(event, context):
    gruencoder = joblib.load("gruencoder.pkl")
    response = {'statusCode': 200, 'body': "OK"}
    return response
Here is the Dockerfile:
FROM public.ecr.aws/lambda/python:3.7
COPY lambda.py ${LAMBDA_TASK_ROOT}
COPY gruencoder.pkl .
COPY GRUModel.py .
RUN pip3 install joblib --target "${LAMBDA_TASK_ROOT}"
RUN pip3 install torch --target "${LAMBDA_TASK_ROOT}"
CMD ["lambda.handler"]
Running lambda.py works locally, but it shows an error on Lambda:
{
  "errorMessage": "module '__main__' has no attribute 'GRUModel'",
  "errorType": "AttributeError",
  "stackTrace": [
    " File \"/var/task/lambda.py\", line 22, in handler\n gruencoder=joblib.load(\"gruencoder.pkl\")\n",
    " File \"/var/task/joblib/numpy_pickle.py\", line 587, in load\n obj = _unpickle(fobj, filename, mmap_mode)\n",
    " File \"/var/task/joblib/numpy_pickle.py\", line 506, in _unpickle\n obj = unpickler.load()\n",
    " File \"/var/lang/lib/python3.7/pickle.py\", line 1088, in load\n dispatch[key[0]](self)\n",
    " File \"/var/lang/lib/python3.7/pickle.py\", line 1376, in load_global\n klass = self.find_class(module, name)\n",
    " File \"/var/lang/lib/python3.7/pickle.py\", line 1430, in find_class\n return getattr(sys.modules[module], name)\n"
  ]
}
I think you need to add the ${LAMBDA_TASK_ROOT} target to your COPY commands for the .pkl and missing .py file:
FROM public.ecr.aws/lambda/python:3.7
COPY lambda.py ${LAMBDA_TASK_ROOT}
COPY gruencoder.pkl ${LAMBDA_TASK_ROOT}
COPY GRUModel.py ${LAMBDA_TASK_ROOT}
RUN pip3 install joblib --target "${LAMBDA_TASK_ROOT}"
RUN pip3 install torch --target "${LAMBDA_TASK_ROOT}"
CMD ["lambda.handler"]

How to run moderngl in Colab?

I'm trying to run moderngl in Colab. I installed it and ran a virtual display:
!sudo apt-get update --fix-missing && apt-get -qqq install x11-utils > /dev/null
!sudo apt-get update --fix-missing && apt-get -qqq install xvfb > /dev/null
!python3 -m pip install -U -qqq moderngl
!python3 -m pip install -U -qqq moderngl-window
!python3 -m pip install -U -qqq pyvirtualdisplay
from pyvirtualdisplay import Display
display = Display(visible=0, size=(960, 540)).start()
import moderngl
ctx = moderngl.create_standalone_context()
buf = ctx.buffer(b'Hello World!') # allocated on the GPU
buf.read()
b'Hello World!'
It printed as expected, but when I run an example I see the error:
!python3 /content/moderngl/examples/basic_alpha_blending.py --window pyglet
2020-03-28 10:25:48,312 - moderngl_window - INFO - Attempting to load window class: moderngl_window.context.pyglet.Window
Traceback (most recent call last):
File "/content/moderngl/examples/basic_alpha_blending.py", line 74, in <module>
AlphaBlending.run()
File "/content/moderngl/examples/ported/_example.py", line 21, in run
mglw.run_window_config(cls)
File "/usr/local/lib/python3.6/dist-packages/moderngl_window/__init__.py", line 185, in run_window_config
cursor=show_cursor if show_cursor is not None else True,
File "/usr/local/lib/python3.6/dist-packages/moderngl_window/context/pyglet/window.py", line 54, in __init__
config=config,
File "/usr/local/lib/python3.6/dist-packages/pyglet/window/xlib/__init__.py", line 165, in __init__
super(XlibWindow, self).__init__(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pyglet/window/__init__.py", line 588, in __init__
config = screen.get_best_config(config)
File "/usr/local/lib/python3.6/dist-packages/pyglet/canvas/base.py", line 194, in get_best_config
raise window.NoSuchConfigException()
pyglet.window.NoSuchConfigException
I also tried with another virtual display, but the result is the same:
!python3 -m pip install -U -qqq xvfbwrapper
from xvfbwrapper import Xvfb
display = Xvfb(width=960, height=540).start()
pyglet.window.NoSuchConfigException
In Google Colab you can use the EGL backend with moderngl 5.6.
ctx = moderngl.create_context(standalone=True, backend='egl')
print(ctx.info)
Output (partial):
{
'GL_VENDOR': 'NVIDIA Corporation',
'GL_RENDERER': 'Tesla P100-PCIE-16GB/PCIe/SSE2',
'GL_VERSION': '3.3.0 NVIDIA 418.67',
....
}
moderngl creates an OpenGL 3.3 core context by default. If you need a higher context version you can pass in, for example, require=430 for OpenGL 4.3. I don't know what these Tesla cards support.
There is a standard example in moderngl for this; it is able to create the standard RGB triangle headlessly: https://github.com/moderngl/moderngl/blob/master/examples/headless_egl.py
The underlying library creating the contexts is glcontext (https://github.com/moderngl/glcontext).
If you are using the moderngl-window package, you have to use the headless.Window, because pyglet is currently not able to work in headless mode (it might in the future: https://github.com/pyglet/pyglet/issues/51).
If you run into issues, open an issue in the moderngl project (https://github.com/moderngl/moderngl) or visit their Discord server.
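As a minimal sketch (assuming moderngl 5.6+ with the EGL backend available in the Colab runtime, and an arbitrary 256x256 target size), you can render into an offscreen framebuffer without any window at all:

import moderngl

# Standalone (headless) context via EGL; pass require=430 here if you
# need a newer OpenGL version than the default 3.3 core context.
ctx = moderngl.create_context(standalone=True, backend='egl')

# Draw into an offscreen framebuffer instead of a window.
fbo = ctx.simple_framebuffer((256, 256))
fbo.use()
fbo.clear(0.2, 0.3, 0.3, 1.0)

# Read the pixels back; len(data) == 256 * 256 * 3 for RGB components.
data = fbo.read(components=3)
print(len(data))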

Windows Error using XGBoost with python

So I'm tackling this machine-learning problem (from a previous Kaggle competition for practice: https://www.kaggle.com/c/nyc-taxi-trip-duration) and I'm trying to use XGBoost but getting an error which I have no clue how to tackle. I searched on google and stack overflow but couldn't find anyone with a similar problem.
I'm using Python 2.7 with the Spyder IDE through Anaconda and I'm on Windows 10. I did have some trouble installing the xgboost package, so I won't completely erase the idea that it could be an installation error. However, I'm also doing a Udemy course on ML where I was able to use xgboost just fine with a small dataset, and I'm using the same functions.
Code
The code is pretty simple:
... import libraries
# import dataset
dataset = pd.read_csv('data/merged.csv')
y = dataset['trip_duration'].values
del dataset['trip_duration'], dataset["id"], dataset['distance']
X = dataset.values
# Split dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
# fit XGBoost to training set
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
Output
However it spits out the following error:
In [1]: classifier.fit(X_train, y_train)
Traceback (most recent call last):
File "<ipython-input-44-f44724590846>", line 1, in <module>
classifier.fit(X_train, y_train)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\sklearn.py", line 464, in fit
verbose_eval=verbose)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\training.py", line 204, in train
xgb_model=xgb_model, callbacks=callbacks)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\training.py", line 74, in _train_internal
bst.update(dtrain, i, obj)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\core.py", line 819, in update
_check_call(_LIB.XGBoosterUpdateOneIter(self.handle, iteration, dtrain.handle))
WindowsError: [Error -529697949] Windows Error 0xE06D7363
I don't really know how to interpret this so any help would be very appreciated.
Thanks in advance
MortZ
Well, after struggling for a few days I managed to find a solution.
A friend of mine told me xgboost is known to have problems with Python 2.7, so I upgraded to 3.6. This didn't entirely solve my problem but gave me a new error:
OSError: [WinError 541541187] Windows Error 0x20474343
After some digging I found a solution to this. The fit function I was trying to use was the source of the problem (although it did work on a different dataset, so I'm not entirely sure why...).
Solution
change
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
to
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
watchlist = [(dtrain, 'train'), (dtest, 'test')]

xgb_pars = {'min_child_weight': 1, 'eta': 0.5, 'colsample_bytree': 0.9,
            'max_depth': 6, 'subsample': 0.9, 'lambda': 1., 'nthread': -1,
            'booster': 'gbtree', 'silent': 1, 'eval_metric': 'rmse',
            'objective': 'reg:linear'}

model = xgb.train(xgb_pars, dtrain, 10, watchlist, early_stopping_rounds=2,
                  maximize=False, verbose_eval=1)
print('Modeling RMSLE %.5f' % model.best_score)
I guess the error is because you are using XGBClassifier instead of XGBRegressor for a regression problem.
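For completeness, a minimal sketch of the scikit-learn style API for regression, assuming the same X_train / y_train / X_test split as in the question (the hyperparameters below are placeholders, not tuned values):

from xgboost import XGBRegressor

# Rough hyperparameters only; tune these for the actual dataset.
regressor = XGBRegressor(max_depth=6, learning_rate=0.5, n_estimators=10)
regressor.fit(X_train, y_train)
predictions = regressor.predict(X_test)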

How to load retrained_graph.pb and retrained_label.txt using pycharm editor

Using Pete Warden's tutorials I retrained the Inception network, and from the training I got two files:
1. retrained_graph.pb
2. retrained_label.txt
Using these I want to classify flower images.
I have installed PyCharm and linked the TensorFlow library; I have also tested sample TensorFlow code and it works fine.
Now when I run the label_image.py program, which is:
import tensorflow as tf, sys

image_path = sys.argv[1]

# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()

# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
               in tf.gfile.GFile("/tf_files/retrained_labels.txt")]

# Unpersists graph from file
with tf.gfile.FastGFile("/tf_files/retrained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Feed the image_data as input to the graph and get first prediction
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
    # Sort to show labels of first prediction in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))
I am getting this error message:
/home/chandan/Tensorflow/bin/python /home/chandan/PycharmProjects/tf/tf_folder/tf_files/label_image.py
Traceback (most recent call last):
File "/home/chandan/PycharmProjects/tf/tf_folder/tf_files/label_image.py", line 7, in <module>
image_path = sys.argv[1]
IndexError: list index out of range
Could anyone please help me with this issue?
You are getting this error because the script expects an image name (with its path) as an argument.
In PyCharm go to View -> Tool Windows -> Terminal.
This is the same as opening a separate terminal. Then run:
python label_image.py /image_path/image_name.jpg
You are trying to get a command-line argument by calling sys.argv[1], so you need to supply command-line arguments to satisfy it. It looks like the required argument is a test image; you should pass its location as a parameter.
PyCharm has a script parameters and interpreter options dialog which you can use to enter the required parameters.
Or you can call the script from a command line and pass the parameter via:
>python my_python_script.py my_python_parameter.jpg
EDIT:
According to the documentation (I don't have PyCharm installed on this computer), you should go to the Run/Debug Configurations menu and edit the configuration for your script. Add the absolute path of your file into the Script Parameters box, in quotes.
Alternatively, if you want to skip the parameter handling completely, read the path with raw_input (input in Python 3) or simply hard-code it, e.g. image_path = r"absolute_image_path.jpg". A minimal sketch of this fallback follows.
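A minimal sketch of that fallback, assuming Python 2.7 as in the rest of the question (the hard-coded path is only a placeholder):

import sys

if len(sys.argv) > 1:
    image_path = sys.argv[1]
else:
    # Prompt for a path, or fall back to a hard-coded placeholder.
    image_path = raw_input("Enter the image path: ") or r"/path/to/flower.jpg"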

Convert pdf to jpg using python 2.7- an error

I am trying to find simple Python code that converts hundreds of PDF files to JPG files in the same folder where the PDF files are located. I use this code from Python Wand converts from PDF to JPG background is incorrect:
from wand.image import Image
from wand.color import Color
import os, os.path, sys

def pdf2jpg(source_file, target_file, dest_width, dest_height):
    RESOLUTION = 300
    ret = True
    try:
        with Image(filename=source_file, resolution=(RESOLUTION, RESOLUTION)) as img:
            img.background_color = Color('white')
            img_width = img.width
            ratio = dest_width / img_width
            img.resize(dest_width, int(ratio * img.height))
            img.format = 'jpeg'
            img.save(filename=target_file)
    except Exception as e:
        ret = False
    return ret

if __name__ == "__main__":
    source_file = r"C:\Users\yaron.KAYAMOT\Desktop\aaa.pdf"
    target_file = r"C:\Users\yaron.KAYAMOT\Desktop\aaa.jpg"
    ret = pdf2jpg(source_file, target_file, 1895, 1080)
But I get an error:
ImportError: MagickWand shared library not found.
You probably had not installed ImageMagick library.
Try to install:
http://docs.wand-py.org/en/latest/guide/install.html#install-imagemagick-on-windows
But I do have the MagickWand module on the hard disk, as shown in cmd:
UPDATE:
When I try to pip install the "wand" module in cmd I get:
So I do have this module. When I try to pip install imagemagick / ImageMagick I get:
You're importing from the wand module, but you probably haven't installed the Python bindings:
pip install Wand
See the details here: http://docs.wand-py.org/en/0.4.2/
Also try the following:
pip search pythonmagick
or something like that, and install all required packages.
This may help you.
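Once the import error is resolved (i.e. both ImageMagick and Wand are installed), a minimal sketch of the original goal, converting every PDF in a folder in place, reusing the pdf2jpg() function from the question (the folder path is just an example):

import os

folder = r"C:\Users\yaron.KAYAMOT\Desktop"  # folder containing the PDF files
for name in os.listdir(folder):
    if name.lower().endswith(".pdf"):
        source = os.path.join(folder, name)
        target = os.path.splitext(source)[0] + ".jpg"
        pdf2jpg(source, target, 1895, 1080)  # writes the JPG next to the PDF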