Fibonacci series code in Python - python-2.7

I wrote this code in Python 2.7 to generate the Fibonacci series, but there is an error in my code:
File "Fib.py", line 2, in <module>
class Fib:
File "Fib.py", line 21, in Fib
for n in Fib(4):
NameError: name 'Fib' is not defined
Can anyone resolve this bug?
class Fib:
    def __init__(self, max):
        self.max = max
    def __iter__(self):
        self.a = 0
        self.b = 1
        return self
    def __next__(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        a, b = b, self.a + self.b
        return fib

for n in Fib(4):
    print n

Disclaimer: I cannot reproduce your error from the code you posted (see below for my guesswork). However, I still get errors, so I'll fix them.
From your posted code:
I get:
Traceback (most recent call last):
File "a.py", line 17, in <module>
for n in Fib(4):
TypeError: instance has no next() method
It seems that, if you're targeting Python 2.7, you got mixed up with Python 3. The __next__ method was introduced in Python 3 (in PEP 3114, if you're interested); in Python 2, use next. Also, since self is needed to access instance member variables, a,b=b,self.a+self.b should be self.a, self.b = self.b, self.a + self.b. This makes your code:
class Fib:
    def __init__(self, max):
        self.max = max
    def __iter__(self):
        self.a = 0
        self.b = 1
        return self
    def next(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        self.a, self.b = self.b, self.a + self.b
        return fib

for n in Fib(4):
    print n
Which produces the output:
0
1
1
2
3
Note that changing next to __next__ and changing print n to print(n) makes this work in Python 3 (but then not in Python 2; if you want both, you need to forward next to __next__ and use parentheses with print).
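As a sketch of that dual-version approach (assuming nothing beyond the code above), the class can alias next to __next__ and use the print() function, so the same file runs under both Python 2 and Python 3:

```python
class Fib:
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.a = 0
        self.b = 1
        return self

    def __next__(self):  # Python 3 iterator protocol
        fib = self.a
        if fib > self.max:
            raise StopIteration
        self.a, self.b = self.b, self.a + self.b
        return fib

    next = __next__  # Python 2 iterator protocol uses next()

for n in Fib(4):
    print(n)  # 0, 1, 1, 2, 3
```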
Guessed actual code:
Judging from your error, your original code probably looked like:
class Fib:
    def __init__(self, max):
        self.max = max
    def __iter__(self):
        self.a = 0
        self.b = 1
        return self
    def __next__(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        a, b = b, self.a + self.b
        return fib
    for n in Fib(4):  # Note that this makes the loop part of the class body
        print n
Indenting the for loop makes it part of the class body, and since the class name is not yet defined at that point, it raises a NameError. For a simpler example that gives a similar error, try:
class A:
    print A
Therefore, the error you experience is most likely just an indentation mixup. Nice idea using an iterator, though.

An easier method to implement the Fibonacci series, using memoization:
known = {0: 0, 1: 1}

def fibonacci(n):
    if n in known:
        return known[n]
    res = fibonacci(n-1) + fibonacci(n-2)
    known[n] = res
    return res
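As a quick check of the memoized version (the function is repeated here so the snippet runs on its own):

```python
known = {0: 0, 1: 1}  # base cases double as the cache

def fibonacci(n):
    if n in known:
        return known[n]
    res = fibonacci(n - 1) + fibonacci(n - 2)
    known[n] = res  # memoize before returning
    return res

print(fibonacci(10))                     # 55
print([fibonacci(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Because each value is stored in known the first time it is computed, repeated and nested calls cost only a dictionary lookup, unlike the plain recursive version below.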

Fibonacci series with recursion
def fib(term_num):
    if term_num == 0 or term_num == 1:
        return term_num
    return fib(term_num-2) + fib(term_num-1)

for x in range(1, 11):
    print(fib(x))
Output Below:
1
1
2
3
5
8
13
21
34
55

Related

How to return function results in an array?

I have one function where I calculate the CPU usage of a test case. The function works, but I would like to append the result of the subtraction to a list for further use.
For example, first I subtract 10 and 15, which is -5. At this point the list looks like [-5]. Next I subtract 20 and 30, which is -10. Now I want the list to look like [-5, -10]. My current code is (python 2.7):
import psutil

class CPU():
    def __init__(self):
        self.cpu_start()
    def cpu_start(self):
        global a
        a = psutil.cpu_percent(interval=1, percpu=False)
        print a
    def cpu_end(self):
        global b
        b = psutil.cpu_percent(interval=1, percpu=False)
        print b
    def diff(self):
        c = a - b
        list = []
        list.append(c)
        print list

def main():
    CPU()

if __name__ == '__main__':
    main()
Just make the diff function return a-b, and append that to an array:
import psutil

class CPU:
    def __init__(self):
        self.list = []
        self.a = 0
        self.b = 0
        self.c = 0
        self.cpu_start()
    def cpu_start(self):
        self.a = psutil.cpu_percent(interval=1, percpu=False)
        return self.a
    def cpu_end(self):
        self.b = psutil.cpu_percent(interval=1, percpu=False)
        return self.b
    def diff(self):
        self.c = self.cpu_start() - self.cpu_end()
        return self.c

def main():
    cpu = CPU()
    results = []
    while True:
        results.append(cpu.diff())
        print results

if __name__ == '__main__':
    main()
Remember that when you're using a class method, you need to create an object of that class, such as cpu = CPU() - here I'm creating an object called cpu of class CPU, initialised with nothing. The __init__ method then creates a and b (as self.a and self.b, so they are instance attributes stored on that object). The diff() method takes no arguments but returns the difference of the two readings. Then I create a list called results with no elements. I run cpu.diff(), which takes the difference between cpu_start() and cpu_end(), and append the result to the results array. This runs in a loop, constantly appending to the array and printing it.
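The return-and-append pattern can also be seen in isolation with a stand-in for psutil; measure() below is a hypothetical placeholder that returns the canned readings from the question instead of real CPU percentages:

```python
readings = iter([10.0, 15.0, 20.0, 30.0])

def measure():
    # Hypothetical stand-in for psutil.cpu_percent(interval=1)
    return next(readings)

def diff():
    start = measure()
    end = measure()
    return start - end  # return the value instead of printing it

results = []
for _ in range(2):
    results.append(diff())
print(results)  # [-5.0, -10.0]
```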
Hope this helps.

implement simplification rule for special functions

I am defining two custom functions in Sympy, called phi and Phi. I know that Phi(x)+Phi(-x) == 1. How do I provide Sympy with this simplification rule? Can I specify this in my class definition?
Here is what I've done so far:
from sympy import Function
from sympy.core.function import ArgumentIndexError

class phi(Function):
    nargs = 1
    def fdiff(self, argindex=1):
        if argindex == 1:
            return -1*self.args[0]*phi(self.args[0])
        else:
            raise ArgumentIndexError(self, argindex)
    @classmethod
    def eval(cls, arg):
        # The function is even, so try to pull out factors of -1
        if arg.could_extract_minus_sign():
            return cls(-arg)

class Phi(Function):
    nargs = 1
    def fdiff(self, argindex=1):
        if argindex == 1:
            return phi(self.args[0])
        else:
            raise ArgumentIndexError(self, argindex)
For the curious, phi and Phi represent the Gaussian PDF and CDF, respectively. These are implemented in sympy.stats. But, in my case, it's easier to interpret results in terms of phi and Phi.
Based upon the comment by Stelios, Phi(x) should return 1-Phi(-x) if x is negative. Therefore, I modified Phi as follows:
class Phi(Function):
    nargs = 1
    def fdiff(self, argindex=1):
        if argindex == 1:
            return phi(self.args[0])
        else:
            raise ArgumentIndexError(self, argindex)
    @classmethod
    def eval(cls, arg):
        # Phi(x) + Phi(-x) == 1
        if arg.could_extract_minus_sign():
            return 1 - cls(-arg)
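With that eval rule in place, the identity is applied automatically whenever the argument carries a minus sign. A minimal check, repeating the class definitions so the snippet is self-contained:

```python
from sympy import Function, Symbol
from sympy.core.function import ArgumentIndexError

class phi(Function):
    nargs = 1
    def fdiff(self, argindex=1):
        if argindex == 1:
            return -1 * self.args[0] * phi(self.args[0])
        raise ArgumentIndexError(self, argindex)

class Phi(Function):
    nargs = 1
    def fdiff(self, argindex=1):
        if argindex == 1:
            return phi(self.args[0])
        raise ArgumentIndexError(self, argindex)

    @classmethod
    def eval(cls, arg):
        # Phi(x) + Phi(-x) == 1, so rewrite Phi(-x) as 1 - Phi(x)
        if arg.could_extract_minus_sign():
            return 1 - cls(-arg)

x = Symbol('x')
print(Phi(-x))           # 1 - Phi(x)
print(Phi(x) + Phi(-x))  # the Phi terms cancel, leaving 1
```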

Python inheritance: confusing arg with kwarg

If I run the following code:
class A(object):
    def __init__(self, x, y, z=3.0):
        self.x = x
        self.y = y
        self.z = z

class B(A):
    def __init__(self, a, b, c="c", *args, **kwargs):
        super(B, self).__init__(*args, **kwargs)
        self.a = a
        self.b = b
        self.c = c

if __name__ == "__main__":
    thing = B("a", "b", 1, 2)
    print thing.x  # expect 1
    print thing.y  # expect 2
    print thing.z  # expect 3
    print thing.a  # expect a
    print thing.b  # expect b
    print thing.c  # expect c
Instead I get :
Traceback (most recent call last):
File "H:/Python/Random Scripts/python_inheritance.py", line 23, in <module>
thing = B(1,2,"a","b")
File "H:/Python/Random Scripts/python_inheritance.py", line 15, in __init__
super(B, self).__init__(*args, **kwargs)
TypeError: __init__() takes at least 3 arguments (2 given)
It seems like Python is parsing the third argument "a" as the keyword argument c instead of as a positional arg. How do I get the behaviour that I expect?
I can obviously do :
class B(A):
    def __init__(self, a, b, *args, **kwargs):
        self.c = kwargs.pop("c", "c")
        super(B, self).__init__(*args, **kwargs)
        self.a = a
        self.b = b
but it seems in every way horrible.
Here are two lines from your code, aligned to show which value is assigned to each name:
def __init__(self, a, b, c="c", *args, **kwargs):
thing = B("a", "b", 1, 2)
As you can see, the 1 is assigned to c, leaving only one argument in the variadic args list. The problem is that there are not two "classes" of argument in the way that you assume: whereas you call c a "kwarg argument" it isn't really defined as such, any more than a is. Both could be addressed by calling B(b="b", **some_dict) assuming some_dict had both an 'a' and a 'c' entry. Instead of a definitional dichotomy between args and kwargs, there are just arguments that have specified default values, and arguments that do not.
I think you're right that kwargs.pop() is your best bet. My code is littered with such "horrible" examples.
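Put together, the kwargs.pop() version gives the behaviour the question expects; a self-contained sketch of the two classes:

```python
class A(object):
    def __init__(self, x, y, z=3.0):
        self.x = x
        self.y = y
        self.z = z

class B(A):
    def __init__(self, a, b, *args, **kwargs):
        # Pull c out of kwargs (defaulting to "c") so a positional
        # third argument is no longer captured by it
        self.c = kwargs.pop("c", "c")
        super(B, self).__init__(*args, **kwargs)
        self.a = a
        self.b = b

thing = B("a", "b", 1, 2)
print(thing.x, thing.y, thing.z)  # 1 2 3.0
print(thing.a, thing.b, thing.c)  # a b c
```

Overriding c is still possible, but only by keyword: B("a", "b", 1, 2, c="C") sets c to "C" while 1 and 2 still reach A.__init__.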

ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0.0

I have applied logistic regression on the train set after splitting the data set into test and train sets, but I got the above error. I tried to work it out, and when I tried to print my response vector y_train in the console it printed integer values like 0 or 1. But when I wrote it to a file I found the values were floats like 0.0 and 1.0. If that's the problem, how can I overcome it?
lenreg = LogisticRegression()
print y_train[0:10]
y_train.to_csv(path='ytard.csv')
lenreg.fit(X_train, y_train)
y_pred = lenreg.predict(X_test)
print metrics.accuracy_score(y_test, y_pred)
The stack trace is as follows:
Traceback (most recent call last):
File "/home/amey/prog/pd.py", line 82, in <module>
lenreg.fit(X_train, y_train)
File "/usr/lib/python2.7/dist-packages/sklearn/linear_model/logistic.py", line 1154, in fit
self.max_iter, self.tol, self.random_state)
File "/usr/lib/python2.7/dist-packages/sklearn/svm/base.py", line 885, in _fit_liblinear
" class: %r" % classes_[0])
ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0.0
Meanwhile I've come across a link to a similar question, which was unanswered. Is there a solution?
The problem here is that your y_train vector, for whatever reason, only has zeros. It is actually not your fault, and it's kind of a bug (I think). The classifier needs 2 classes or else it throws this error.
It makes sense: if your y_train vector only has zeros (i.e. only 1 class), then the classifier doesn't really need to do any work, since all predictions should just be the one class.
In my opinion the classifier should still complete, just predict the one class (all zeros in this case) and throw a warning, but it doesn't. It throws the error instead.
A way to check for this condition is (note that np.sum returns a scalar, so it must not be wrapped in len(); for 0/1 labels the sum equals the length when all labels are 1 and equals 0 when all are 0):
import numpy as np

lenreg = LogisticRegression()
print y_train[0:10]
y_train.to_csv(path='ytard.csv')

if np.sum(y_train) in [len(y_train), 0]:
    print "all one class"
    # do something else
else:
    # OK to proceed
    lenreg.fit(X_train, y_train)
    y_pred = lenreg.predict(X_test)
    print metrics.accuracy_score(y_test, y_pred)
To overcome the problem more easily, I would recommend just including more samples in your test set, like 100 or 1000 instead of 10.
I had the same problem using learning_curve:
train_sizes, train_scores, test_scores = learning_curve(
    estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes,
    scoring="f1", random_state=RANDOM_SEED, shuffle=True)
Add the shuffle parameter, which randomizes the sets.
This doesn't prevent the error from happening, but it increases the chances of having both classes in the subsets used by the function.
I found it to be because only 1's or 0's wound up in my y_test, since my sample size was really small. Try changing your test_size value.
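Two quick safeguards are a plain numpy check that a label vector actually contains both classes, and (in scikit-learn) passing stratify=y to train_test_split so that class proportions are preserved in each split. The check alone looks like:

```python
import numpy as np

def has_both_classes(y):
    # A binary classifier can only be fit when the labels
    # contain at least two distinct classes
    return len(np.unique(y)) >= 2

print(has_both_classes(np.zeros(5)))                 # False
print(has_both_classes(np.array([0., 1., 0., 1.])))  # True
```

With scikit-learn, train_test_split(X, y, test_size=0.3, stratify=y) keeps the 0/1 ratio the same in both subsets, which avoids single-class splits whenever the full data set has both classes.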
# python3
import numpy as np
from sklearn.svm import LinearSVC

def upgrade_to_work_with_single_class(SklearnPredictor):
    class UpgradedPredictor(SklearnPredictor):
        def __init__(self, *args, **kwargs):
            self._single_class_label = None
            super().__init__(*args, **kwargs)
        @staticmethod
        def _has_only_one_class(y):
            return len(np.unique(y)) == 1
        def _fitted_on_single_class(self):
            return self._single_class_label is not None
        def fit(self, X, y=None):
            if self._has_only_one_class(y):
                self._single_class_label = y[0]
            else:
                super().fit(X, y)
            return self
        def predict(self, X):
            if self._fitted_on_single_class():
                return np.full(X.shape[0], self._single_class_label)
            else:
                return super().predict(X)
    return UpgradedPredictor

LinearSVC = upgrade_to_work_with_single_class(LinearSVC)
or the hard way (more correct):
import numpy as np
from sklearn.svm import LinearSVC
from copy import deepcopy, copy
from functools import wraps

def copy_class(cls):
    copy_cls = type(f'{cls.__name__}', cls.__bases__, dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        try:
            hash(attr)
        except TypeError:
            # Assume lack of __hash__ implies mutability. This is NOT
            # a bullet proof assumption but good in many cases.
            setattr(copy_cls, name, deepcopy(attr))
    return copy_cls

def upgrade_to_work_with_single_class(SklearnPredictor):
    SklearnPredictor = copy_class(SklearnPredictor)
    original_init = deepcopy(SklearnPredictor.__init__)
    original_fit = deepcopy(SklearnPredictor.fit)
    original_predict = deepcopy(SklearnPredictor.predict)

    @staticmethod
    def _has_only_one_class(y):
        return len(np.unique(y)) == 1

    def _fitted_on_single_class(self):
        return self._single_class_label is not None

    @wraps(SklearnPredictor.__init__)
    def new_init(self, *args, **kwargs):
        self._single_class_label = None
        original_init(self, *args, **kwargs)

    @wraps(SklearnPredictor.fit)
    def new_fit(self, X, y=None):
        if self._has_only_one_class(y):
            self._single_class_label = y[0]
        else:
            original_fit(self, X, y)
        return self

    @wraps(SklearnPredictor.predict)
    def new_predict(self, X):
        if self._fitted_on_single_class():
            return np.full(X.shape[0], self._single_class_label)
        else:
            return original_predict(self, X)

    setattr(SklearnPredictor, '_has_only_one_class', _has_only_one_class)
    setattr(SklearnPredictor, '_fitted_on_single_class', _fitted_on_single_class)
    SklearnPredictor.__init__ = new_init
    SklearnPredictor.fit = new_fit
    SklearnPredictor.predict = new_predict
    return SklearnPredictor

LinearSVC = upgrade_to_work_with_single_class(LinearSVC)
You can find the index of the first (or any) occurrence of each class, concatenate those samples on top of the arrays, and delete them from their original positions; that way there will be at least one instance of each class in the training set.
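A sketch of that idea with numpy on hypothetical toy arrays; np.unique with return_index=True gives the first occurrence of each class, which is then moved to the front:

```python
import numpy as np

# Toy data: class 1 only appears late in the array
X = np.array([[0.], [1.], [2.], [3.], [4.]])
y = np.array([0, 0, 0, 1, 1])

# Index of the first occurrence of each class
_, first_idx = np.unique(y, return_index=True)

# Move those rows to the front, keep the rest in order
rest = np.setdiff1d(np.arange(len(y)), first_idx)
order = np.concatenate([first_idx, rest])
X, y = X[order], y[order]
print(y)  # [0 1 0 0 1]
```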
This error relates to the dataset you are using: it contains only one class (for example only 1/benign), whereas it must contain two classes, 1 and 0 (or Benign and Attack).

python 2.7 - is there a more succinct way to do this series of yield statements (in Python 3, "yield from" would help)

Situation:
Python 2.7 code that contains a number of "yield" statements. But the specs have changed.
Each yield calls a function that used to always return a value. Now the result is sometimes a value that should be yielded, but sometimes no value should be yielded.
Dumb Example:
BEFORE:
def always(x):
    return 11 * x

def do_stuff():
    # ... other code; each yield is buried inside an if or other flow construct ...
    # ...
    yield always(1)
    # ...
    yield always(6)
    # ...
    yield always(5)

print( list( do_stuff() ) )
=>
[11, 66, 55]
AFTER (if I could use Python 3, but that is not currently an option):
def maybe(x):
    """ only keep odd value; returns list with 0 or 1 elements. """
    result = 11 * x
    return [result] if bool(result & 1) else []

def do_stuff():
    # ...
    yield from maybe(1)
    # ...
    yield from maybe(6)
    # ...
    yield from maybe(5)
=>
[11, 55]
AFTER (in Python 2.7):
def maybe(x):
    """ only keep odd value; returns list with 0 or 1 elements. """
    result = 11 * x
    return [result] if bool(result & 1) else []

def do_stuff():
    # ...
    for x in maybe(1): yield x
    # ...
    for x in maybe(6): yield x
    # ...
    for x in maybe(5): yield x
NOTE: In the actual code I am translating, the "yields" are buried inside various flow-control constructs. And the "maybe" function has two parameters, and is more complex.
MY QUESTION:
Observe that each call to "maybe" returns either 1 value to yield, or 0 values to yield.
(It would be fine to change "maybe" to return the value, or to return None when there is no value, if that helps.)
Given this 0/1 situation, is there any more succinct way to code?
If, as you say, you can get away with returning None, then I'd leave the code as it was in the first place:
def maybe(x):
    """ only keep odd value; returns either element or None """
    result = 11 * x
    if result & 1:
        return result

def do_stuff():
    yield maybe(1)
    yield maybe(6)
    yield maybe(5)
but use a wrapped version instead which tosses the Nones, like:
def do_stuff_use():
    return (x for x in do_stuff() if x is not None)
You could even wrap the whole thing up in a decorator, if you wanted:
import functools

def yield_not_None(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return (x for x in f(*args, **kwargs) if x is not None)
    return wrapper

@yield_not_None
def do_stuff():
    yield maybe(1)
    yield maybe(6)
    yield maybe(5)
after which
>>> list(do_stuff())
[11, 55]
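Putting the decorator together with the None-returning maybe gives a self-contained version (using print() so it runs under Python 2 and 3 alike):

```python
import functools

def yield_not_None(f):
    # Wrap a generator function so that None values are filtered out
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return (x for x in f(*args, **kwargs) if x is not None)
    return wrapper

def maybe(x):
    """Return 11*x when the product is odd, else None."""
    result = 11 * x
    if result & 1:
        return result

@yield_not_None
def do_stuff():
    yield maybe(1)
    yield maybe(6)
    yield maybe(5)

print(list(do_stuff()))  # [11, 55]
```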