I'm trying to build a Seq2Seq regression example for time-series analysis, and I've used the Seq2Seq library as presented at the Dev Summit, which is currently the code on the TensorFlow GitHub r1.0 branch.
I have difficulty understanding how the decoder function works for Seq2Seq, specifically the "cell_output".
I understand that num_decoder_symbols is the number of classes/words to decode at each time step. I have it working to the point where I can train. However, I don't get why I can't just substitute the number of features (num_features) for num_decoder_symbols. Basically, I want to be able to run the decoder without teacher forcing, in other words pass the output of the previous time step as the input to the next time step.
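To make the intent concrete, here is a toy numpy sketch of decoding without teacher forcing (this is just an illustration, not the contrib API; the cell, output_fn, sizes and start vector are made-up stand-ins):

import numpy as np

# Toy sketch: the previous step's projected output becomes the next step's input.
num_features, num_hidden, max_decoder_steps = 4, 8, 5
W_in = np.random.randn(num_features, num_hidden) * 0.1
W_out = np.random.randn(num_hidden, num_features) * 0.1

def cell(x, h):                       # stand-in RNN cell
    return np.tanh(x.dot(W_in) + h)

def output_fn(h):                     # projection from hidden size to num_features
    return h.dot(W_out)

state = np.zeros(num_hidden)          # stand-in for the encoder's final state
step_input = np.zeros(num_features)   # stand-in for a "start of sequence" vector
outputs = []
for t in range(max_decoder_steps):
    state = cell(step_input, state)
    outputs.append(output_fn(state))
    step_input = outputs[-1]          # no teacher forcing: feed the output back in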
with ops.name_scope(name, "simple_decoder_fn_inference",
[time, cell_state, cell_input, cell_output,
context_state]):
if cell_input is not None:
raise ValueError("Expected cell_input to be None, but saw: %s" %
cell_input)
if cell_output is None:
# invariant that this is time == 0
next_input_id = array_ops.ones([batch_size,], dtype=dtype) * (
start_of_sequence_id)
done = array_ops.zeros([batch_size,], dtype=dtypes.bool)
cell_state = encoder_state
cell_output = array_ops.zeros([num_decoder_symbols],
dtype=dtypes.float32)
Here is a link to the original code: https://github.com/tensorflow/tensorflow/blob/r1.0/tensorflow/contrib/seq2seq/python/ops/decoder_fn.py
Why don't I need to pass batch_size for the cell output?
cell_output = array_ops.zeros([batch_size, num_decoder_symbols],
dtype=dtypes.float32)
I am trying to use this code to create my own regression Seq2Seq example, where instead of an output of class probabilities I have a real-valued vector of dimension num_features. As I understood it, I should be able to replace num_decoder_symbols with num_features, like below:
def decoder_fn(time, cell_state, cell_input, cell_output, context_state):
    """
    Again same as in simple_decoder_fn_inference but for regression on sequences with a fixed length
    """
    with ops.name_scope(name, "simple_decoder_fn_inference", [time, cell_state, cell_input, cell_output, context_state]):
        if cell_input is not None:
            raise ValueError("Expected cell_input to be None, but saw: %s" % cell_input)
        if cell_output is None:
            # invariant that this is time == 0
            next_input = array_ops.ones([batch_size, num_features], dtype=dtype)
            done = array_ops.zeros([batch_size], dtype=dtypes.bool)
            cell_state = encoder_state
            cell_output = array_ops.zeros([num_features], dtype=dtypes.float32)
        else:
            cell_output = output_fn(cell_output)
            done = math_ops.equal(0, 1)  # hardcoded hack just to properly define done
            next_input = cell_output
        # if time > maxlen, return all true vector
        done = control_flow_ops.cond(math_ops.greater(time, maximum_length),
                                     lambda: array_ops.ones([batch_size,], dtype=dtypes.bool),
                                     lambda: done)
        return (done, cell_state, next_input, cell_output, context_state)
return decoder_fn
But, I get the following error:
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/seq2seq/python/ops/seq2seq.py", line 212, in dynamic_rnn_decoder
swap_memory=swap_memory, scope=scope)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 1036, in raw_rnn
swap_memory=swap_memory)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2605, in while_loop
result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2438, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2388, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 980, in body
(next_output, cell_state) = cell(current_input, state)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 327, in __call__
input_size = inputs.get_shape().with_rank(2)[1]
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 635, in with_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape (100,) must have rank 2
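As far as I understand, the cell rejects this because a TensorFlow RNN cell consumes one time step of shape [batch_size, input_size]. A minimal sketch, assuming the TF 1.x contrib RNN API (the cell type and sizes here are made up for illustration):

import tensorflow as tf

batch_size, num_features, num_hidden = 10, 100, 64
cell = tf.contrib.rnn.BasicRNNCell(num_hidden)
state = cell.zero_state(batch_size, tf.float32)
step_input = tf.zeros([batch_size, num_features])  # rank 2: [batch_size, input_size]
output, state = cell(step_input, state)            # a rank-1 input here would fail with_rank(2)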
As a result, I passed in the batch_size like this in order to get a Shape of rank 2:
cell_output = array_ops.zeros([batch_size, num_features],
dtype=dtypes.float32)
But then I get the following error, where the shape is rank 3 while rank 2 is expected:
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/seq2seq/python/ops/seq2seq.py", line 212, in dynamic_rnn_decoder
swap_memory=swap_memory, scope=scope)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 1036, in raw_rnn
swap_memory=swap_memory)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2605, in while_loop
result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2438, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2388, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 980, in body
(next_output, cell_state) = cell(current_input, state)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 327, in __call__
input_size = inputs.get_shape().with_rank(2)[1]
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 635, in with_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape (10, 10, 100) must have rank 2
Related
I have a bunch of code. The program is written in Python 2 and uses an old version of PyMC, probably version 2.x.
When I run
python run.py
the error I am facing is:
Init failure
Init failure
Init failure
Init failure
Init failure
Init failure
Init failure
Init failure
No previous MCMC data found.
Traceback (most recent call last):
  File "run.py", line 106, in <module>
    M=run_MCMC(ms)
  File "run.py", line 94, in run_MCMC
    mcmc = pm.MCMC(model, db=db, name=name)
  File "/home/divyadeep/miniconda3/envs/detrital/lib/python2.7/site-packages/pymc/MCMC.py", line 90, in __init__
    **kwds)
  File "/home/divyadeep/miniconda3/envs/detrital/lib/python2.7/site-packages/pymc/Model.py", line 191, in __init__
    Model.__init__(self, input, name, verbose)
  File "/home/divyadeep/miniconda3/envs/detrital/lib/python2.7/site-packages/pymc/Model.py", line 92, in __init__
    ObjectContainer.__init__(self, input)
  File "/home/divyadeep/miniconda3/envs/detrital/lib/python2.7/site-packages/pymc/Container.py", line 605, in __init__
    input_to_file = input.__dict__
AttributeError: 'NoneType' object has no attribute '__dict__'
I have tried to comment out some of the __init__ calls in the program, but I am still not able to run it.
run.py is as follows:
def InitExhumation(settings):
    """Initialize piece-wise linear exhumation model"""
    #Check that erosion and age break priors are meaningful
    if (settings.erate_prior[0] >= settings.erate_prior[1]):
        print "\nInvalid range for erate_prior."
        sys.exit()
    if (settings.abr_prior[0] >= settings.abr_prior[1]):
        print "\nInvalid range for abr_prior."
        sys.exit()
    #Create erosion rate parameters (e1, e2, ...)
    e = []
    for i in range(1,settings.breaks+2):
        e.append(pm.Uniform("e%i" % i, settings.erate_prior[0], settings.erate_prior[1]))
    #Create age break parameters (abr1, ...)
    abr_i = settings.abr_prior[0]
    abr = []
    for i in range(1,settings.breaks+1):
        abr_i = pm.Uniform("abr%i" % i, abr_i, settings.abr_prior[1])
        abr.append(abr_i)
    return e, abr
def ExhumationModel(settings):
    """Set up the exhumation model"""
    #Check that error rate priors are meaningful
    if (settings.error_prior[0] >= settings.error_prior[1]):
        print "\nInvalid range for error_prior."
        sys.exit()
    err = pm.Uniform('RelErr',settings.error_prior[0],settings.error_prior[1])
    #Closure elevation priors
    hc_parms={'AFT':[3.7, 0.8, 6.0, 2.9], 'AHe':[2.2, 0.5, 3.7, 1.6]}
    e, abr = InitExhumation(settings)
    nodes = [err, e, abr]
    hc = {}
    for sample in settings.samples:
        parms = e[:]
        h_mu = np.mean(sample.catchment.z)
        if sample.tc_type not in hc.keys():
            hc[sample.tc_type] = pm.TruncatedNormal("hc_%s"%sample.tc_type, h_mu-hc_parms[sample.tc_type][0],
                                                    1/hc_parms[sample.tc_type][1]**2,
                                                    h_mu-hc_parms[sample.tc_type][2],
                                                    h_mu-hc_parms[sample.tc_type][3])
            nodes.append(hc[sample.tc_type])
        parms.append(hc[sample.tc_type])
        parms.extend(abr)
        if isinstance(sample, DetritalSample):
            idx_i = pm.Categorical("Index_" + sample.sample_name, p = sample.catchment.bins['w'], size=len(sample.dt_ages))
            nodes.extend([idx_i])
            exp_i = pm.Lambda("ExpAge_" + sample.sample_name, lambda parm=parms, idx=idx_i: ba.h2a(sample.catchment.bins['h'][idx],parm))
            value = sample.dt_ages
        else:
            idx_i = None
            exp_i = pm.Lambda("ExpAge_" + sample.sample_name, lambda parm=parms: ba.h2a(sample.br_elevation,parm), plot=False)
            value = sample.br_ages
        obs_i = pm.Normal("ObsAge_" + sample.sample_name, mu = exp_i, tau = 1./(err*exp_i)**2, value = value, observed=True)
        sim_i = pm.Lambda("SimAge_" + sample.sample_name, lambda ta=exp_i, err=err: pm.rnormal(mu = ta, tau = 1./(err*ta)**2))
        nodes.extend([exp_i, obs_i, sim_i])
    return nodes
def run_MCMC(settings):
    """Run MCMC algorithm"""
    burn = settings.iterations/2
    thin = (settings.iterations-burn) / settings.finalChainSize
    name = "%s" % settings.model_name + "_%ibrk" % settings.breaks
    attempt = 0
    model=None
    while attempt<5000:
        try:
            model = ExhumationModel(settings)
            break
        except pm.ZeroProbability, ValueError:
            attempt+=1
            #print "Init failure %i" % attemp
            print "Init failure "
    try:
        #The following creates text files for the chains rather than hdf5
        db = pm.database.txt.load(name + '.txt')
        #db = pm.database.hdf5.load(name + '.hdf5')
        print "\nExisting MCMC data loaded.\n"
    except AttributeError:
        print "\nNo previous MCMC data found.\n"
        db='txt'
    mcmc = pm.MCMC(model, db=db, name=name)
    #mcmc.use_step_method(pm.AdaptiveMetropolis, M.parm)
    if settings.iterations > 1:
        mcmc.sample(settings.iterations,burn=burn,thin=thin)
    return mcmc
if __name__ == '__main__':
    sys.path[0:0] = './' # Puts current directory at the start of path
    import model_setup as ms
    if len(sys.argv)>1: ms.iterations = int(sys.argv[1])
    M=run_MCMC(ms)
    #import pdb; pdb.set_trace()
    #Output and diagnostics
    try:
        ba.statistics(M, ms.samples)
    except TypeError:
        print "\nCannot compute stats without resampling (PyMC bug?).\n"
    ps.chains(M, ms.finalChainSize, ms.iterations, ms.samples, ms.output_format)
    ps.summary(M, ms.samples, ms.output_format)
    ps.ks_gof(M, ms.samples, ms.output_format)
    ps.histograms(ms.samples, ms.show_histogram, ms.output_format)
    ps.discrepancy(M, ms.samples, ms.output_format)
    ## ps.unorthodox_ks(M, ms.output_format)
    ## try:
    ##     ps.catchment(M.catchment_dem, format=ms.output_format)
    ## except KeyError:
    ##     print "\nUnable to generate catchment plot."
    M.db.close()
I am trying to load batches of images, each of a different size (to be specific, they are from the Pascal VOC dataset). The source_images.npy file contains images of different heights and widths but the same number of channels. What am I doing wrong? Are there any other methods for feeding images of different sizes?
This is my code:
def feed(images, im, epochs=None):
    epochs_elapsed = 0
    while epochs is None or epochs_elapsed < epochs:
        for i in range(len(images)):
            yield {im: images[i]}
        epochs_elapsed += 1

def tf_ops(images, capacity=200):
    im = tf.placeholder(tf.float32)
    queue = tf.FIFOQueue(capacity, [tf.float32])
    enqueue_op = queue.enqueue(im)
    fqr = FeedingQueueRunner(queue, [enqueue_op],
                             feed_fns=[feed(images, im).next()])
    tf.train.add_queue_runner(fqr)
    return queue.dequeue()
source_images = np.load('source_images.npy')
source_images=source_images.tolist()
source_im= tf_ops(source_images)
source_im_batch = tf.train.batch([source_im],batch_size=128,capacity=200, dynamic_pad=True)
Error:
source_im_batch = tf.train.batch([source_im], batch_size=128,capacity=200, dynamic_pad=True)
File "/home/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 872, in batch
name=name)
File "/home/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 655, in _batch
shapes = _shapes([tensor_list], shapes, enqueue_many)
File "/home/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 598, in _shapes
raise ValueError("Cannot infer Tensor's rank: %s" % tl[i])
ValueError: Cannot infer Tensor's rank: Tensor("fifo_queue_Dequeue:0", dtype=float32)
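One detail that seems relevant: tf.train.batch needs at least the rank of the dequeued tensor to be known. A hedged sketch of one possible workaround, assuming the images are height x width x 3 (I have not verified this on the Pascal VOC data):

# The FIFOQueue above is created without shapes, so queue.dequeue() has unknown rank.
# Declaring a partial shape lets tf.train.batch infer the rank and dynamic_pad pad the rest.
source_im = tf_ops(source_images)       # as in the code above
source_im.set_shape([None, None, 3])    # assumption: HxWx3 images
source_im_batch = tf.train.batch([source_im], batch_size=128,
                                 capacity=200, dynamic_pad=True)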
I am having the hardest time figuring out why I am getting this error. I have searched a lot but have been unable to find any solution.
import numpy as np
import warnings
from collections import Counter
import pandas as pd
def k_nearest_neighbors(data, predict, k=3):
    if len(data) >= k:
        warnings.warn('K is set to a value less than total voting groups!')
    distances = []
    for group in data:
        for features in data[group]:
            euclidean_distance = np.linalg.norm(np.array(features)-
                                                np.array(predict))
            distances.append([euclidean_distance,group])
    votes = [i[1] for i in sorted(distances)[:k]]
    vote_result = Counter(votes).most_common(1)[0][0]
    return vote_result
df = pd.read_csv("data.txt")
df.replace('?',-99999, inplace=True)
df.drop(['id'], 1, inplace=True)
full_data = df.astype(float).values.tolist()
print(full_data)
After running, it gives this error:
Traceback (most recent call last):
  File "E:\Jazab\Machine Learning\Lec18(Testing K Neatest Nerighbors Classifier)\Lec18(Testing K Neatest Nerighbors Classifier)\Lec18_Testing_K_Neatest_Nerighbors_Classifier_.py", line 25, in <module>
    full_data = df.astype(float).values.tolist()
  File "C:\Python27\lib\site-packages\pandas\util\_decorators.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 3299, in astype
    **kwargs)
  File "C:\Python27\lib\site-packages\pandas\core\internals.py", line 3224, in astype
    return self.apply('astype', dtype=dtype, **kwargs)
  File "C:\Python27\lib\site-packages\pandas\core\internals.py", line 3091, in apply
    applied = getattr(b, f)(**kwargs)
  File "C:\Python27\lib\site-packages\pandas\core\internals.py", line 471, in astype
    **kwargs)
  File "C:\Python27\lib\site-packages\pandas\core\internals.py", line 521, in _astype
    values = astype_nansafe(values.ravel(), dtype, copy=True)
  File "C:\Python27\lib\site-packages\pandas\core\dtypes\cast.py", line 636, in astype_nansafe
    return arr.astype(dtype)
ValueError: invalid literal for float(): 3) <-----Reappears in Group 8 as:
Press any key to continue . . .
If I remove astype(float), the program runs fine.
What do I need to do?
There is bad data (3)), so you need to_numeric with apply, because it needs to process all columns.
Non-numeric values are converted to NaN, which are then replaced by fillna with some scalar, e.g. 0:
full_data = df.apply(pd.to_numeric, errors='coerce').fillna(0).values.tolist()
Sample:
df = pd.DataFrame({'A':[1,2,7], 'B':['3)',4,5]})
print (df)
   A   B
0  1  3)
1  2   4
2  7   5
full_data = df.apply(pd.to_numeric, errors='coerce').fillna(0).values.tolist()
print (full_data)
[[1.0, 0.0], [2.0, 4.0], [7.0, 5.0]]
It looks like you have 3) as an entry in your CSV file, and Pandas is complaining because it can't cast it to a float because of the ).
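A tiny self-contained illustration of both points (the values here are made up): float() cannot parse the literal '3)', while pd.to_numeric with errors='coerce' turns it into NaN:

import pandas as pd

try:
    float("3)")                     # raises ValueError: invalid literal for float()
except ValueError as e:
    print(e)

s = pd.Series(["3)", "4", "5"])
print(pd.to_numeric(s, errors="coerce").fillna(0).tolist())  # [0.0, 4.0, 5.0]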
I am currently trying to implement an RNN for regression.
I need to create a neural network capable of converting audio samples into a vector of MFCC features. I already know what the features for each audio sample are, so the task itself is to create a neural network that is capable of converting a list of audio samples into the desired MFCC features.
The second problem I am facing is that since the audio files I am sampling have different lengths, the lists of audio samples also have different lengths, which causes a problem with the number of inputs I need to feed into the neural network. I found this post on how to handle variable sequence lengths and tried to incorporate it into my implementation of an RNN, but I seem to get a lot of errors for reasons I can't explain.
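To illustrate what I mean by zero-padding the variable-length samples up to a common length, here is a small numpy sketch (the helper name and shapes are mine, not from the linked post):

import numpy as np

def pad_to_max(samples, max_length):
    # Zero-pad a list of 1-D arrays to shape [len(samples), max_length, 1].
    batch = np.zeros((len(samples), max_length, 1), dtype=np.float32)
    for i, s in enumerate(samples):
        batch[i, :len(s), 0] = s
    return batch

# e.g. pad_to_max([np.array([0.1, 0.2]), np.array([0.3])], max_length=4)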
Could anyone see what is going wrong with my implementation?
Here is the code:
def length(sequence):  ## Zero padding to fit the max length... Question whether that is a good idea.
    used = tf.sign(tf.reduce_max(tf.abs(sequence), reduction_indices=2))
    length = tf.reduce_sum(used, reduction_indices=1)
    length = tf.cast(length, tf.int32)
    return length

def cost(output, target):
    # Compute cross entropy for each frame.
    cross_entropy = target * tf.log(output)
    cross_entropy = -tf.reduce_sum(cross_entropy, reduction_indices=2)
    mask = tf.sign(tf.reduce_max(tf.abs(target), reduction_indices=2))
    cross_entropy *= mask
    # Average over actual sequence lengths.
    cross_entropy = tf.reduce_sum(cross_entropy, reduction_indices=1)
    cross_entropy /= tf.reduce_sum(mask, reduction_indices=1)
    return tf.reduce_mean(cross_entropy)

def last_relevant(output):
    max_length = int(output.get_shape()[1])
    relevant = tf.reduce_sum(tf.mul(output, tf.expand_dims(tf.one_hot(length, max_length), -1)), 1)
    return relevant
files_train_path = [dnn_train+f for f in listdir(dnn_train) if isfile(join(dnn_train, f))]
files_test_path = [dnn_test+f for f in listdir(dnn_test) if isfile(join(dnn_test, f))]
files_train_name = [f for f in listdir(dnn_train) if isfile(join(dnn_train, f))]
files_test_name = [f for f in listdir(dnn_test) if isfile(join(dnn_test, f))]
os.chdir(dnn_train)
train_name,train_data = generate_list_of_names_data(files_train_path)
train_data, train_names, train_output_data, train_class_output = load_sound_files(files_train_path,train_name,train_data)
max_length = 0  ## Used for variable sequence input
for element in train_data:
    if element.size > max_length:
        max_length = element.size
NUM_EXAMPLES = len(train_data)/2
test_data = train_data[NUM_EXAMPLES:]
test_output = train_output_data[NUM_EXAMPLES:]
train_data = train_data[:NUM_EXAMPLES]
train_output = train_output_data[:NUM_EXAMPLES]
print("--- %s seconds ---" % (time.time() - start_time))
#----------------------------------------------------------------------#
#----------------------------Main--------------------------------------#
### Tensorflow neural network setup
batch_size = None
sequence_length_max = max_length
input_dimension=1
data = tf.placeholder(tf.float32,[batch_size,sequence_length_max,input_dimension])
target = tf.placeholder(tf.float32,[None,14])
num_hidden = 24 ## Hidden layer
cell = tf.nn.rnn_cell.LSTMCell(num_hidden,state_is_tuple=True) ## Long short term memory
output, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32,sequence_length = length(data)) ## Creates the Rnn skeleton
last = last_relevant(output)  #tf.gather(val, int(val.get_shape()[0]) - 1) ## Appending as last
weight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
cross_entropy = cost(output,target)# How far am I from correct value?
optimizer = tf.train.AdamOptimizer() ## TensorflowOptimizer
minimize = optimizer.minimize(cross_entropy)
mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
## Training ##
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
batch_size = 1000
no_of_batches = int(len(train_data)/batch_size)
epoch = 5000
for i in range(epoch):
    ptr = 0
    for j in range(no_of_batches):
        inp, out = train_data[ptr:ptr+batch_size], train_output[ptr:ptr+batch_size]
        ptr += batch_size
        sess.run(minimize, {data: inp, target: out})
    print "Epoch - ", str(i)
    incorrect = sess.run(error, {data: test_data, target: test_output})
    print('Epoch {:2d} error {:3.1f}%'.format(i + 1, 100 * incorrect))
sess.close()
Error message:
Traceback (most recent call last):
File "tensorflow_test.py", line 177, in <module>
last = last_relevant(output)#tf.gather(val, int(val.get_shape()[0]) - 1) ## Appedning as last
File "tensorflow_test.py", line 132, in last_relevant
relevant = tf.reduce_sum(tf.mul(output, tf.expand_dims(tf.one_hot(length, max_length), -1)), 1)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 2778, in one_hot
name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 1413, in _one_hot
axis=axis, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 454, in apply_op
as_ref=input_arg.is_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 621, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 180, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 163, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 421, in make_tensor_proto
tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/compat.py", line 45, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got <function length at 0x7f51a7a3ede8>
Edit:
Changing it to tf.one_hot(length(output), max_length) gives me this error message:
Traceback (most recent call last):
File "tensorflow_test.py", line 184, in <module>
cross_entropy = cost(output,target)# How far am I from correct value?
File "tensorflow_test.py", line 121, in cost
cross_entropy = target * tf.log(output)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 754, in binary_op_wrapper
return func(x, y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 903, in _mul_dispatch
return gen_math_ops.mul(x, y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 1427, in mul
result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2312, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1704, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1801, in _BroadcastShape
% (shape_x, shape_y))
ValueError: Incompatible shapes for broadcasting: (?, 14) and (?, 138915, 24)
tf.one_hot(length, ...)
Here, length is a function, not a tensor. Try length(something) instead, i.e. call the function on a tensor.
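For example, a sketch using the data, output and max_length variables already defined in the question (untested, and note the last relevant step sits at index length - 1):

seq_len = length(data)                    # [batch] int32 tensor of true sequence lengths
last_index = seq_len - 1                  # last relevant output is at position length - 1
relevant = tf.reduce_sum(
    tf.mul(output, tf.expand_dims(tf.one_hot(last_index, max_length), -1)), 1)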
Hi, I am trying to run a convolutional neural network adapted from the MNIST2 tutorial in TensorFlow.
I am having the following error, but I am not sure what is going on:
W tensorflow/core/framework/op_kernel.cc:909] Invalid argument: Shape mismatch in tuple component 0. Expected [784], got [6272]
W tensorflow/core/framework/op_kernel.cc:909] Invalid argument: Shape mismatch in tuple component 0. Expected [784], got [6272]
Traceback (most recent call last):
File "4_Treino_Rede_Neural.py", line 161, in <module>
train_accuracy = accuracy.eval(feed_dict={keep_prob: 1.0})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 555, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3498, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 636, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.OutOfRangeError: RandomShuffleQueue '_0_input/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
[[Node: input/shuffle_batch = QueueDequeueMany[_class=["loc:#input/shuffle_batch/random_shuffle_queue"], component_types=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/shuffle_batch/n)]]
Caused by op u'input/shuffle_batch', defined at:
File "4_Treino_Rede_Neural.py", line 113, in <module>
x, y_ = inputs(train=True, batch_size=FLAGS.batch_size, num_epochs=FLAGS.num_epochs)
File "4_Treino_Rede_Neural.py", line 93, in inputs
min_after_dequeue=1000)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 779, in shuffle_batch
dequeued = queue.dequeue_many(batch_size, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 400, in dequeue_many
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 465, in _queue_dequeue_many
timeout_ms=timeout_ms, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
self._traceback = _extract_stack()
My program is:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
import time
import numpy as np
import tensorflow as tf
# Basic model parameters as external flags.
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_integer('num_epochs', 2, 'Number of epochs to run trainer.')
flags.DEFINE_integer('batch_size', 100, 'Batch size.')
flags.DEFINE_string('train_dir', '/root/data', 'Directory with the training data.')
#flags.DEFINE_string('train_dir', '/root/data2', 'Directory with the training data.')
# Constants used for dealing with the files, matches convert_to_records.
TRAIN_FILE = 'train.tfrecords'
VALIDATION_FILE = 'validation.tfrecords'
# Set-up dos pacotes
sess = tf.InteractiveSession()
def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        # Defaults are not specified since both keys are required.
        features={
            'image_raw': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })
    # Convert from a scalar string tensor (whose single string has
    # length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
    # [mnist.IMAGE_PIXELS].
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    image.set_shape([784])
    # OPTIONAL: Could reshape into a 28x28 image and apply distortions
    # here. Since we are not applying any distortions in this
    # example, and the next step expects the image to be flattened
    # into a vector, we don't bother.
    # Convert from [0, 255] -> [-0.5, 0.5] floats.
    image = tf.cast(image, tf.float32) * (1. / 255) - 0.5
    # Convert label from a scalar uint8 tensor to an int32 scalar.
    label = tf.cast(features['label'], tf.int32)
    return image, label
def inputs(train, batch_size, num_epochs):
    """Reads input data num_epochs times.
    Args:
      train: Selects between the training (True) and validation (False) data.
      batch_size: Number of examples per returned batch.
      num_epochs: Number of times to read the input data, or 0/None to
        train forever.
    Returns:
      A tuple (images, labels), where:
      * images is a float tensor with shape [batch_size, 30,26,1]
        in the range [-0.5, 0.5].
      * labels is an int32 tensor with shape [batch_size] with the true label,
        a number in the range [0, char letras).
      Note that an tf.train.QueueRunner is added to the graph, which
      must be run using e.g. tf.train.start_queue_runners().
    """
    if not num_epochs: num_epochs = None
    filename = os.path.join(FLAGS.train_dir,
                            TRAIN_FILE if train else VALIDATION_FILE)
    with tf.name_scope('input'):
        filename_queue = tf.train.string_input_producer(
            [filename], num_epochs=num_epochs)
        # Even when reading in multiple threads, share the filename
        # queue.
        image, label = read_and_decode(filename_queue)
        # Shuffle the examples and collect them into batch_size batches.
        # (Internally uses a RandomShuffleQueue.)
        # We run this in two threads to avoid being a bottleneck.
        images, sparse_labels = tf.train.shuffle_batch(
            [image, label], batch_size=batch_size, num_threads=2,
            capacity=1000 + 3 * batch_size,
            # Ensures a minimum amount of shuffling of examples.
            min_after_dequeue=1000)
        return images, sparse_labels
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
#Variables
x, y_ = inputs(train=True, batch_size=FLAGS.batch_size, num_epochs=FLAGS.num_epochs)
#onehot_y_ = tf.one_hot(y_, 36, dtype=tf.float32)
#y_ = tf.string_to_number(y_, out_type=tf.int32)
#Layer 1
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
#Layer 2
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
#Densely Connected Layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#Dropout - reduces overfitting
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
#Readout layer
W_fc2 = weight_variable([1024, 36])
b_fc2 = bias_variable([36])
#y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
#Train and evaluate
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(onehot_y_ * tf.log(y_conv), reduction_indices=[1]))
cross_entropy = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(y_conv, y_))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
for i in range(20000):
    if i%100 == 0:
        train_accuracy = accuracy.eval(feed_dict={keep_prob: 1.0})
        print("step %d, training accuracy %g"%(i, train_accuracy))
    train_step.run(feed_dict={keep_prob: 0.5})
x, y_ = inputs(train=True, batch_size=2000)
#y_ = tf.string_to_number(y_, out_type=tf.int32)
print("test accuracy %g"%accuracy.eval(feed_dict={keep_prob: 1.0}))
coord.join(threads)
sess.close()
Can anyone explain what's going on, and how can I fix it?
Thanks!
Marcelo V
I had similar problems in the past, and it was because I was storing and reading the data with incorrect data types. For example, I had cast the data to float when converting the original PNG data to TFRecords. Then, when I read the data back out of the TFRecords, I once again cast it as float (while assuming the data coming out was uint8). Hence I had a mismatch of 3136 (784*4) when 784 was expected. I'm guessing that may also be the case for you here.
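To illustrate the kind of mismatch I mean, a sketch against the read_and_decode function above (not your exact data, so treat it as an assumption): if the pixels were serialized as float32, each value occupies 4 bytes, so decoding the raw bytes as uint8 yields 784 * 4 = 3136 elements:

# The dtype passed to tf.decode_raw must match the dtype the data was written with.
image = tf.decode_raw(features['image_raw'], tf.uint8)    # 3136 values if the bytes encode float32
image = tf.decode_raw(features['image_raw'], tf.float32)  # 784 values, matching image.set_shape([784])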
In the line:
filename_queue = tf.train.string_input_producer([filename], num_epochs=num_epochs)
You specify the number of epochs the queue will run through the filenames. The documentation explains it well:
num_epochs: An integer (optional). If specified, string_input_producer produces each string from string_tensor num_epochs times before generating an OutOfRange error. If not specified, string_input_producer can cycle through the strings in string_tensor an unlimited number of times.
In flags.DEFINE_integer('num_epochs', 2, 'Number of epochs to run trainer.') you specify a default of 2 epochs. You should either increase that, or remove the num_epochs argument from string_input_producer.
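For example, either of these (sketches of the two options, using the names from your program):

# Option 1: drop num_epochs so the producer cycles through the file indefinitely
filename_queue = tf.train.string_input_producer([filename])

# Option 2: keep num_epochs but raise the default above 2
flags.DEFINE_integer('num_epochs', 10, 'Number of epochs to run trainer.')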