Invalid literal for float in k nearest neighbor - python-2.7

I am having the hardest time figuring out why I am getting this error. I have searched a lot but have been unable to find any solution.
import numpy as np
import warnings
from collections import Counter
import pandas as pd
def k_nearest_neighbors(data, predict, k=3):
    if len(data) >= k:
        warnings.warn('K is set to a value less than total voting groups!')
    distances = []
    for group in data:
        for features in data[group]:
            euclidean_distance = np.linalg.norm(np.array(features) - np.array(predict))
            distances.append([euclidean_distance, group])
    votes = [i[1] for i in sorted(distances)[:k]]
    vote_result = Counter(votes).most_common(1)[0][0]
    return vote_result
df = pd.read_csv("data.txt")
df.replace('?',-99999, inplace=True)
df.drop(['id'], 1, inplace=True)
full_data = df.astype(float).values.tolist()
print(full_data)
After running, it gives this error:
Traceback (most recent call last):
File "E:\Jazab\Machine Learning\Lec18(Testing K Neatest Nerighbors
Classifier)\Lec18(Testing K Neatest Nerighbors
Classifier)\Lec18_Testing_K_Neatest_Nerighbors_Classifier_.py", line 25, in
<module>
full_data = df.astype(float).values.tolist()
File "C:\Python27\lib\site-packages\pandas\util\_decorators.py", line 91, in
wrapper
return func(*args, **kwargs)
File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 3299, in
astype
**kwargs)
File "C:\Python27\lib\site-packages\pandas\core\internals.py", line 3224, in
astype
return self.apply('astype', dtype=dtype, **kwargs)
File "C:\Python27\lib\site-packages\pandas\core\internals.py", line 3091, in
apply
applied = getattr(b, f)(**kwargs)
File "C:\Python27\lib\site-packages\pandas\core\internals.py", line 471, in
astype
**kwargs)
File "C:\Python27\lib\site-packages\pandas\core\internals.py", line 521, in
_astype
values = astype_nansafe(values.ravel(), dtype, copy=True)
File "C:\Python27\lib\site-packages\pandas\core\dtypes\cast.py", line 636,
in astype_nansafe
return arr.astype(dtype)
ValueError: invalid literal for float(): 3) <-----Reappears in Group 8 as:
If I remove astype(float), the program runs fine.
What do I need to do?

There is bad data (3)), so you need to_numeric with apply, because all columns need to be processed.
Non-numeric values are converted to NaNs, which are then replaced by fillna with some scalar, e.g. 0:
full_data = df.apply(pd.to_numeric, errors='coerce').fillna(0).values.tolist()
Sample:
df = pd.DataFrame({'A':[1,2,7], 'B':['3)',4,5]})
print (df)
   A   B
0  1  3)
1  2   4
2  7   5
full_data = df.apply(pd.to_numeric, errors='coerce').fillna(0).values.tolist()
print (full_data)
[[1.0, 0.0], [2.0, 4.0], [7.0, 5.0]]

It looks like you have 3) as an entry in your CSV file, and Pandas is complaining because it can't cast it to a float because of the ).
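If you want to see which cells are actually failing before deciding how to fill them, here is a rough sketch against the toy frame from the sample above (not the asker's data.txt; the mask logic is my own addition):
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 7], 'B': ['3)', 4, 5]})
coerced = df.apply(pd.to_numeric, errors='coerce')
# Cells that were not NaN originally but became NaN after coercion failed to parse.
bad = df.where(coerced.isnull() & df.notnull())
print(bad.dropna(how='all'))   # shows the row containing '3)'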

Related

Appending list but got "AttributeError: 'NoneType' object has no attribute 'append'" Python 2.7

I keep getting this error in my code:
"AttributeError: 'NoneType' object has no attribute 'append'"
Interestingly, it only occurs when n is odd.
Here is the piece of code I'm using:
Note: I'm using Python 2.7.18
def sol(n, df):
    if n == 1:
        result = df
    elif n == 2 or n == 0:
        df.append(1)
        result = df
    elif n % 2 == 0:
        df.append(n/2)
        df = sol(n/2, df)
    else:
        df_2 = df[:]
        df_2 = df_2.append(n-1)
        n_2 = n-1
        df_2 = sol(n_2, df_2)
    return df

df = []
n = input('n == ')
sol(n, df)
The error is as follows:
n == 3
Traceback (most recent call last):
File "Challenge_3_1_vTest.py", line 27, in <module>
solution(n)
File "Challenge_3_1_vTest.py", line 6, in solution
print((sol(n, df)))
File "Challenge_3_1_vTest.py", line 21, in sol
df_2 = sol(n_2, df_2)
File "Challenge_3_1_vTest.py", line 12, in sol
df.append(1)
AttributeError: 'NoneType' object has no attribute 'append'
The append method doesn't return anything; that's why I was getting the error and was passing None when using numbers that are not powers of 2.
df_2 = df_2.append(n-1)
This line of code was the source of the problem; I should have done:
df_2.append(n-1)
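To see why the original line failed, here is a tiny standalone illustration (not taken from the question's code) of the fact that list.append mutates in place and returns None:
lst = [1, 2]
result = lst.append(3)  # append modifies lst in place and returns None
print(lst)      # [1, 2, 3]
print(result)   # None
# So 'df_2 = df_2.append(n-1)' rebinds df_2 to None, and the next recursive
# call then tries to call .append on None.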

Tensorflow 1.0 Seq2Seq Decoder function

I'm trying to make a Seq2Seq regression example for time-series analysis, and I've used the Seq2Seq library as presented at the Dev Summit, which is currently the code on the TensorFlow GitHub branch r1.0.
I have difficulties understanding how the decoder function works for Seq2Seq, specifically for the "cell_output".
I understand that num_decoder_symbols is the number of classes/words to decode at each time step. I have it working to the point where I can do training. However, I don't get why I can't just substitute the number of features (num_features) for num_decoder_symbols. Basically, I want to be able to run the decoder without teacher forcing, in other words to pass the output of the previous time step as the input to the next time step.
with ops.name_scope(name, "simple_decoder_fn_inference",
                    [time, cell_state, cell_input, cell_output,
                     context_state]):
    if cell_input is not None:
        raise ValueError("Expected cell_input to be None, but saw: %s" %
                         cell_input)
    if cell_output is None:
        # invariant that this is time == 0
        next_input_id = array_ops.ones([batch_size,], dtype=dtype) * (
            start_of_sequence_id)
        done = array_ops.zeros([batch_size,], dtype=dtypes.bool)
        cell_state = encoder_state
        cell_output = array_ops.zeros([num_decoder_symbols],
                                      dtype=dtypes.float32)
Here is a link to the original code: https://github.com/tensorflow/tensorflow/blob/r1.0/tensorflow/contrib/seq2seq/python/ops/decoder_fn.py
Why don't I need to pass batch_size for the cell output?
cell_output = array_ops.zeros([batch_size, num_decoder_symbols],
dtype=dtypes.float32)
When trying to use this code to create my own regression Seq2Seq example, instead of an output of probabilities over classes I have a real-valued vector of dimension num_features. As I understood it, I thought I could replace num_decoder_symbols with num_features, like below:
def decoder_fn(time, cell_state, cell_input, cell_output, context_state):
    """
    Again same as in simple_decoder_fn_inference but for regression on sequences with a fixed length
    """
    with ops.name_scope(name, "simple_decoder_fn_inference",
                        [time, cell_state, cell_input, cell_output, context_state]):
        if cell_input is not None:
            raise ValueError("Expected cell_input to be None, but saw: %s" % cell_input)
        if cell_output is None:
            # invariant that this is time == 0
            next_input = array_ops.ones([batch_size, num_features], dtype=dtype)
            done = array_ops.zeros([batch_size], dtype=dtypes.bool)
            cell_state = encoder_state
            cell_output = array_ops.zeros([num_features], dtype=dtypes.float32)
        else:
            cell_output = output_fn(cell_output)
            done = math_ops.equal(0, 1)  # hardcoded hack just to properly define done
            next_input = cell_output
        # if time > maxlen, return all true vector
        done = control_flow_ops.cond(math_ops.greater(time, maximum_length),
                                     lambda: array_ops.ones([batch_size,], dtype=dtypes.bool),
                                     lambda: done)
        return (done, cell_state, next_input, cell_output, context_state)
return decoder_fn
But, I get the following error:
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/seq2seq/python/ops/seq2seq.py", line 212, in dynamic_rnn_decoder
swap_memory=swap_memory, scope=scope)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 1036, in raw_rnn
swap_memory=swap_memory)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2605, in while_loop
result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2438, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2388, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 980, in body
(next_output, cell_state) = cell(current_input, state)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 327, in __call__
input_size = inputs.get_shape().with_rank(2)[1]
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 635, in with_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape (100,) must have rank 2
As a result, I passed in the batch_size like this in order to get a Shape of rank 2:
cell_output = array_ops.zeros([batch_size, num_features],
dtype=dtypes.float32)
But I get the following error, where Shape is of rank 3 and wants a rank 2 instead:
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/seq2seq/python/ops/seq2seq.py", line 212, in dynamic_rnn_decoder
swap_memory=swap_memory, scope=scope)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 1036, in raw_rnn
swap_memory=swap_memory)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2605, in while_loop
result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2438, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2388, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 980, in body
(next_output, cell_state) = cell(current_input, state)
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 327, in __call__
input_size = inputs.get_shape().with_rank(2)[1]
File "/opt/DL/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 635, in with_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape (10, 10, 100) must have rank 2

tensorflow.python.framework.errors.OutOfRangeError:

Hi, I am trying to run a convolutional neural network adapted from the MNIST tutorial in TensorFlow.
I am getting the following error, but I am not sure what is going on:
W tensorflow/core/framework/op_kernel.cc:909] Invalid argument: Shape mismatch in tuple component 0. Expected [784], got [6272]
W tensorflow/core/framework/op_kernel.cc:909] Invalid argument: Shape mismatch in tuple component 0. Expected [784], got [6272]
Traceback (most recent call last):
File "4_Treino_Rede_Neural.py", line 161, in <module>
train_accuracy = accuracy.eval(feed_dict={keep_prob: 1.0})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 555, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3498, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 636, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.OutOfRangeError: RandomShuffleQueue '_0_input/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
[[Node: input/shuffle_batch = QueueDequeueMany[_class=["loc:#input/shuffle_batch/random_shuffle_queue"], component_types=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input/shuffle_batch/random_shuffle_queue, input/shuffle_batch/n)]]
Caused by op u'input/shuffle_batch', defined at:
File "4_Treino_Rede_Neural.py", line 113, in <module>
x, y_ = inputs(train=True, batch_size=FLAGS.batch_size, num_epochs=FLAGS.num_epochs)
File "4_Treino_Rede_Neural.py", line 93, in inputs
min_after_dequeue=1000)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 779, in shuffle_batch
dequeued = queue.dequeue_many(batch_size, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 400, in dequeue_many
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 465, in _queue_dequeue_many
timeout_ms=timeout_ms, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
self._traceback = _extract_stack()
My program is:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
import time
import numpy as np
import tensorflow as tf
# Basic model parameters as external flags.
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_integer('num_epochs', 2, 'Number of epochs to run trainer.')
flags.DEFINE_integer('batch_size', 100, 'Batch size.')
flags.DEFINE_string('train_dir', '/root/data', 'Directory with the training data.')
#flags.DEFINE_string('train_dir', '/root/data2', 'Directory with the training data.')
# Constants used for dealing with the files, matches convert_to_records.
TRAIN_FILE = 'train.tfrecords'
VALIDATION_FILE = 'validation.tfrecords'
# Set-up dos pacotes
sess = tf.InteractiveSession()
def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        # Defaults are not specified since both keys are required.
        features={
            'image_raw': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })
    # Convert from a scalar string tensor (whose single string has
    # length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
    # [mnist.IMAGE_PIXELS].
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    image.set_shape([784])
    # OPTIONAL: Could reshape into a 28x28 image and apply distortions
    # here. Since we are not applying any distortions in this
    # example, and the next step expects the image to be flattened
    # into a vector, we don't bother.
    # Convert from [0, 255] -> [-0.5, 0.5] floats.
    image = tf.cast(image, tf.float32) * (1. / 255) - 0.5
    # Convert label from a scalar uint8 tensor to an int32 scalar.
    label = tf.cast(features['label'], tf.int32)
    return image, label
def inputs(train, batch_size, num_epochs):
    """Reads input data num_epochs times.
    Args:
      train: Selects between the training (True) and validation (False) data.
      batch_size: Number of examples per returned batch.
      num_epochs: Number of times to read the input data, or 0/None to
        train forever.
    Returns:
      A tuple (images, labels), where:
      * images is a float tensor with shape [batch_size, 30,26,1]
        in the range [-0.5, 0.5].
      * labels is an int32 tensor with shape [batch_size] with the true label,
        a number in the range [0, char letras).
      Note that an tf.train.QueueRunner is added to the graph, which
      must be run using e.g. tf.train.start_queue_runners().
    """
    if not num_epochs: num_epochs = None
    filename = os.path.join(FLAGS.train_dir,
                            TRAIN_FILE if train else VALIDATION_FILE)
    with tf.name_scope('input'):
        filename_queue = tf.train.string_input_producer(
            [filename], num_epochs=num_epochs)
        # Even when reading in multiple threads, share the filename
        # queue.
        image, label = read_and_decode(filename_queue)
        # Shuffle the examples and collect them into batch_size batches.
        # (Internally uses a RandomShuffleQueue.)
        # We run this in two threads to avoid being a bottleneck.
        images, sparse_labels = tf.train.shuffle_batch(
            [image, label], batch_size=batch_size, num_threads=2,
            capacity=1000 + 3 * batch_size,
            # Ensures a minimum amount of shuffling of examples.
            min_after_dequeue=1000)
    return images, sparse_labels
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
#Variaveis
x, y_ = inputs(train=True, batch_size=FLAGS.batch_size, num_epochs=FLAGS.num_epochs)
#onehot_y_ = tf.one_hot(y_, 36, dtype=tf.float32)
#y_ = tf.string_to_number(y_, out_type=tf.int32)
#Layer 1
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
#Layer 2
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
#Densely Connected Layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#Dropout - reduz overfitting
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
#Readout layer
W_fc2 = weight_variable([1024, 36])
b_fc2 = bias_variable([36])
#y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
#Train and evaluate
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(onehot_y_ * tf.log(y_conv), reduction_indices=[1]))
cross_entropy = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(y_conv, y_))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
for i in range(20000):
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={keep_prob: 0.5})
x, y_ = inputs(train=True, batch_size=2000)
#y_ = tf.string_to_number(y_, out_type=tf.int32)
print("test accuracy %g"%accuracy.eval(feed_dict={keep_prob: 1.0}))
coord.join(threads)
sess.close()
Can anyone explain to me what's going on, and how to fix it?
Thanks!
Marcelo V
I had similar problems in the past, and it was because I was storing and reading the data with inconsistent data types. For example, I had cast the data as float when converting the original PNG data to TFRecords, and then, when reading it back out, I decoded it as if it were uint8. Hence I got a mismatch of 3136 (784*4) when 784 was expected. I'm guessing that may also be the case for you here.
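If that is what happened, the fix is to decode with the same dtype that was used when the records were written. A sketch of the idea only (I don't know how your TFRecords were actually written):
# Inside read_and_decode, assuming the image data was serialized as float32:
image = tf.decode_raw(features['image_raw'], tf.float32)  # must match the write-time dtype
image.set_shape([784])
# The cast/scaling step below would then also need adjusting, since the values
# are no longer uint8 in [0, 255].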
In the line:
filename_queue = tf.train.string_input_producer([filename], num_epochs=num_epochs)
You specify the number of epochs the queue will run through the filenames. The documentation explains it well:
num_epochs: An integer (optional). If specified, string_input_producer produces each string from string_tensor num_epochs times before generating an OutOfRange error. If not specified, string_input_producer can cycle through the strings in string_tensor an unlimited number of times.
In flags.DEFINE_integer('num_epochs', 2, 'Number of epochs to run trainer.'), you specify a default of 2 epochs. You should either increase that or remove the num_epochs argument from string_input_producer.
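Concretely, either of these changes (sketched against the code above) keeps the queue from being exhausted so quickly:
# Option 1: raise the default number of epochs.
flags.DEFINE_integer('num_epochs', 10, 'Number of epochs to run trainer.')

# Option 2: drop num_epochs so the producer cycles through the file indefinitely.
filename_queue = tf.train.string_input_producer([filename])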

Patsy's dmatrices cannot read my formula

I have a function LogReg, which is as follows: (using justmarkham's code as inspiration)
def LogReg(self):
    formulA = "class ~"
    print self.frame  # dataframe used
    print self.columnNames[:-1]
    for a in self.columnNames[:-1]:
        formulA += " {0} +".format(a)
    formula = formulA[:-2]  # there is always a \n behind, we don't want that
    print "formula = " + formula
    Y, X = dmatrices(formula, self.frame, return_type="dataframe")
    Y = np.ravel(Y)  # flatten Y to a 1D list
    model = LogisticRegression()  # from sklearn.linear_model
    model = model.fit(X, Y)
    print model.score(X, Y)
with the following outcome:
          a0  a1  a2  a3  class
picture1   1   2   3  67      1
picture2   6   7  45  61      3
picture3   8   7   6   5      2
picture4   1   2   4   3      0
['a0', 'a1', 'a2', 'a3']
formula = class ~ a0 + a1 + a2 + a3
Traceback (most recent call last):
File "classification.py", line 80, in <module>
c.LogReg()
File "classification.py", line 61, in LogReg
Y,X = dmatrices(formula, self.frame, return_type="dataframe")
File "/<path>/python2.7/site-packages/patsy/highlevel.py", line 297, in dmatrices
NA_action, return_type)
File "/<path>/python2.7/site-packages/patsy/highlevel.py", line 152, in _do_highlevel_design
NA_action)
File "/<path>/python2.7/site-packages/patsy/highlevel.py", line 57, in _try_incr_builders
NA_action)
File "/<path>/python2.7/site-packages/patsy/build.py", line 660, in design_matrix_builders
NA_action)
File "/<path>/python2.7/site-packages/patsy/build.py", line 424, in _examine_factor_types
value = factor.eval(factor_states[factor], data)
File "/<path>/python2.7/site-packages/patsy/eval.py", line 485, in eval
return self._eval(memorize_state["eval_code"], memorize_state, data)
File "/<path>/python2.7/site-packages/patsy/eval.py", line 468, in _eval
code, inner_namespace=inner_namespace)
File "/<path>/python2.7/site-packages/patsy/compat.py", line 117, in call_and_wrap_exc
return f(*args, **kwargs)
File "/<path>/python2.7/site-packages/patsy/eval.py", line 125, in eval
code = compile(expr, source_name, "eval", self.flags, False)
File "<string>", line 1
class
^
SyntaxError: unexpected EOF while parsing
I do not see what goes wrong here: as far as I know the string does not contain an EOF character, nor does the Python code seem erroneous. So the question is: where does it go wrong (and preferably, how do I fix it)?
P.S.: The software used is all the most recent stable packages available as of 04/09/2015.
Well, that was quick. By asking the question, I suddenly had syntax highlighting in the code, notifying me that class is a reserved keyword in Python and should not be used as a variable name. Nano doesn't give those colours, leaving me blind.
Lesson learned: kids, don't use class as a variable.
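A minimal way around it, sketched on the frame from the question (the replacement name label is my own choice, and I'm assuming class is the last entry of self.columnNames):
# Rename the reserved word before building the formula.
self.frame = self.frame.rename(columns={'class': 'label'})
formula = 'label ~ ' + ' + '.join(self.columnNames[:-1])
Y, X = dmatrices(formula, self.frame, return_type='dataframe')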

Sympy Can't differentiate wrt the variable

I am trying to evaluate a function (the second derivative of another one), but SymPy seems to have difficulty doing that.
from sympy import *
from sympy import Symbol

# Symbols
theta = Symbol('theta')
phi = Symbol('phi')
phi0 = Symbol('phi0')
H0 = Symbol('H0')

# Constants
a = 0.05
t = 100*1e-9
b = 0.05**2/(8*pi*1e-7)
c = 0.001/(4*pi*1e-7)
phi0 = 60*pi/180
H0 = -0.03/(4*pi*1e-7)

def m(theta, phi):
    return Matrix([[sin(theta)*cos(phi), sin(theta)*cos(phi), cos(phi)]])

def h(phi0):
    return Matrix([[cos(phi0), sin(phi0), 0]])

def k(theta, phi, phi0):
    return m(theta, phi).dot(h(phi0))

def F(theta, phi, phi0, H0):
    return -(t*a*H0)*k(theta,phi,phi0) + b*t*(cos(theta)**2) + c*t*(sin(2*theta)**2) + t*sin(theta)**4*sin(2*phi)**2

def F_theta(theta, phi, phi0, H0):
    return simplify(diff(F(theta,phi,phi0,H0), theta))

def F_thetatheta(theta, phi, phi0, H0):
    return simplify(diff(F_theta(theta,phi,phi0,H0), theta))

print F_thetatheta(theta,phi,phi0,H0), F_thetatheta(pi/2,phi,phi0,H0)
As seen below, the general function is evaluated, but when I try to replace theta with pi/2 or another value, it does not work.
(4.0e-7*pi*sin(theta)**4*cos(2*phi)**2 - 4.0e-7*pi*sin(theta)**4 + 0.00125*sin(theta)**2 - 0.0001875*sqrt(3)*sin(theta)*cos(phi) - 0.0001875*sin(theta)*cos(phi) + 1.2e-6*pi*cos(2*phi)**2*cos(theta)**4 - 1.2e-6*pi*cos(2*phi)**2*cos(theta)**2 - 1.2e-6*pi*cos(theta)**4 + 1.2e-6*pi*cos(theta)**2 + 0.004*cos(2*theta)**2 - 0.002625)/pi
Traceback (most recent call last):
File "Test.py", line 46, in <module>
print F_thetatheta(theta,phi,phi0,H0), F_thetatheta(pi/2,phi,phi0,H0)
File "Test.py", line 29, in F_thetatheta
return simplify(diff(F_theta(theta,phi,phi0,H0),theta))
File "Test.py", line 27, in F_theta
return simplify(diff(F(theta,phi,phi0,H0),theta))
File "/usr/lib64/python2.7/site-packages/sympy/core/function.py", line 1418, in diff
return Derivative(f, *symbols, **kwargs)
File "/usr/lib64/python2.7/site-packages/sympy/core/function.py", line 852, in __new__
Can\'t differentiate wrt the variable: %s, %s''' % (v, count)))
ValueError:
Can't differentiate wrt the variable: pi/2, 1
The error means you cannot differentiate with respect to a number (pi/2). That is, you differentiate with respect to a variable (x, y, ...), not a number.
In an expression with several variables, you can substitute one (or more) of them with its value (or another expression) using subs:
F_thetatheta(theta,phi,phi0,H0).subs(theta, pi/2)
Then, to evaluate it to the desired accuracy you can use evalf. Compare the two results:
F_thetatheta(theta,phi,phi0,H0).evalf(50, subs={theta:pi/2, phi:0})
F_thetatheta(theta,phi,phi0,H0).subs({theta: pi/2, phi:0})
You should probably have a look at the sympy documentation or follow the tutorial. The documentation is very good, and you can even try the examples in the browser and evaluate code.
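For completeness, here is a tiny self-contained example of the differentiate-then-substitute pattern described above, using a toy function rather than the one from the question:
from sympy import Symbol, sin, diff, pi

x = Symbol('x')
f = sin(x)**2
fxx = diff(f, x, 2)                # differentiate symbolically first
print(fxx.subs(x, pi/2))           # -2: substitute the numeric value afterwards
print(fxx.evalf(subs={x: pi/2}))   # numeric evaluation of the same result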