If I run the following code:
class A(object):
    def __init__(self, x, y, z=3.0):
        self.x = x
        self.y = y
        self.z = z

class B(A):
    def __init__(self, a, b, c="c", *args, **kwargs):
        super(B, self).__init__(*args, **kwargs)
        self.a = a
        self.b = b
        self.c = c

if __name__ == "__main__":
    thing = B("a", "b", 1, 2)
    print thing.x  # expect 1
    print thing.y  # expect 2
    print thing.z  # expect 3
    print thing.a  # expect a
    print thing.b  # expect b
    print thing.c  # expect c
Instead I get:

Traceback (most recent call last):
  File "H:/Python/Random Scripts/python_inheritance.py", line 23, in <module>
    thing = B(1,2,"a","b")
  File "H:/Python/Random Scripts/python_inheritance.py", line 15, in __init__
    super(B, self).__init__(*args, **kwargs)
TypeError: __init__() takes at least 3 arguments (2 given)
It seems like Python is binding the third positional argument to the parameter c instead of leaving it for *args. How do I get the behaviour that I expect?
I can obviously do:

class B(A):
    def __init__(self, a, b, *args, **kwargs):
        self.c = kwargs.pop("c", "c")
        super(B, self).__init__(*args, **kwargs)
        self.a = a
        self.b = b

but it seems in every way horrible.
Here are two lines from your code, aligned to show which value is assigned to each name:
def __init__(self, a,   b,   c="c", *args, **kwargs):
thing     = B(     "a", "b", 1,     2)
As you can see, the 1 is assigned to c, leaving only one argument in the variadic args list. The problem is that there are not two "classes" of argument in the way that you assume: whereas you call c a "kwarg argument" it isn't really defined as such, any more than a is. Both could be addressed by calling B(b="b", **some_dict) assuming some_dict had both an 'a' and a 'c' entry. Instead of a definitional dichotomy between args and kwargs, there are just arguments that have specified default values, and arguments that do not.
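If it helps to make the binding visible, inspect.getcallargs (available since Python 2.7) reports how a call's values map onto a signature. The demo function below is mine, mirroring B.__init__ purely for illustration:

import inspect

def demo(self, a, b, c="c", *args, **kwargs):
    pass

print inspect.getcallargs(demo, None, "a", "b", 1, 2)
# -> {'self': None, 'a': 'a', 'b': 'b', 'c': 1, 'args': (2,), 'kwargs': {}}
# The 1 fills c positionally, so only the 2 is left over for *args.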
I think you're right that kwargs.pop() is your best bet. My code is littered with such "horrible" examples.
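For completeness: if Python 3 is an option, keyword-only arguments give exactly the behaviour you expected, since anything declared after *args can only be passed by name. A sketch:

class B(A):
    def __init__(self, a, b, *args, c="c", **kwargs):  # c is keyword-only
        super().__init__(*args, **kwargs)
        self.a = a
        self.b = b
        self.c = c

thing = B("a", "b", 1, 2)  # x == 1, y == 2, and c stays "c"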
I have a function that calculates the CPU usage of a test case. The function works, but I would like to append the result of the subtraction to a list for further use.
For example, first I subtract 10 and 15, which is -5. At this point the list looks like [-5]. Next I subtract 20 and 30, which is -10. Now I want the list to look like [-5, -10]. My current code is (Python 2.7):
import psutil

class CPU():
    def __init__(self):
        self.cpu_start()

    def cpu_start(self):
        global a
        a = psutil.cpu_percent(interval=1, percpu=False)
        print a

    def cpu_end(self):
        global b
        b = psutil.cpu_percent(interval=1, percpu=False)
        print b

    def diff(self):
        c = a - b
        list = []
        list.append(c)
        print list

def main():
    CPU()

if __name__ == '__main__':
    main()
Just make the diff function return a-b, and append that to an array:
import psutil

class CPU:
    def __init__(self):
        self.list = []
        self.a = 0
        self.b = 0
        self.c = 0
        self.cpu_start()

    def cpu_start(self):
        self.a = psutil.cpu_percent(interval=1, percpu=False)
        return self.a

    def cpu_end(self):
        self.b = psutil.cpu_percent(interval=1, percpu=False)
        return self.b

    def diff(self):
        # Difference between two consecutive readings.
        self.c = self.cpu_start() - self.cpu_end()
        return self.c

def main():
    cpu = CPU()
    results = []
    while True:
        results.append(cpu.diff())
        print results

if __name__ == '__main__':
    main()
Remember that to use a method you need an object of that class, such as cpu = CPU(): I'm creating an object called cpu of class CPU, initialised with nothing. The __init__ method creates a and b (as self.a and self.b, so they are instance attributes stored on that object). The diff() method takes no arguments, but returns the difference between consecutive readings from cpu_start() and cpu_end(). Then I create a list called results with no elements, call cpu.diff(), and append the result to the results list. This runs in a loop, constantly appending to the list and printing it.
Hope this helps.
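As a variation, since the class above initialises self.list but never uses it, you could let the object accumulate its own history of differences instead of keeping a separate results list. A minimal sketch (the history attribute name is mine):

import psutil

class CPU:
    def __init__(self):
        self.history = []  # accumulated differences

    def diff(self):
        start = psutil.cpu_percent(interval=1, percpu=False)
        end = psutil.cpu_percent(interval=1, percpu=False)
        self.history.append(start - end)
        return self.history[-1]

cpu = CPU()
cpu.diff()
cpu.diff()
print cpu.history  # e.g. [-5.0, -10.0] after two calls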
So here is my code:
class Shape(object):
    def __init__(self, coords):
        super(Shape, self).__init__()
        self._coords = list(map(list, coords))

    def move(self, distance):
        self._coords = distance

    def __getitem__(self, key):
        return self._coords[key]

class Point(Shape):
    def __init__(self, coords):
        super(Point, self).__init__(coords)

if __name__ == '__main__':
    p = Point((0, 0))
    p.move((1, 1))
    assert p[0, 0], p[0, 1] == (1, 1)
Basically I want to create a subclass Point from the parent class Shape.
The __init__ of Shape should stay the same, and I want to create a new Point and pass the test under "main".
As it stands, the code raises TypeError: 'int' object is not iterable.
As a beginner in Python I'm stuck on a solution. What arguments can I pass so that _coords accepts them? How can I connect Point and Shape?
class Shape(object):
    def __init__(self, coords):
        super(Shape, self).__init__()
        self._coords = list(map(list, [coords]))  # <--- to make it iterable, enclose it in []

    def move(self, distance):
        self._coords = distance

    def __getitem__(self, key):
        return self._coords[key]

class Point(Shape):
    def __init__(self, coords):
        super(Point, self).__init__(coords)

if __name__ == '__main__':
    p = Point((0, 0))
    p.move((1, 1))
    # self._coords is a flat sequence after move(), so fetch the values by index:
    assert (p[0], p[1]) == (1, 1)
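Another option (a sketch, not the only way) is to have Shape normalise its input itself: probe the first element, wrap a flat pair like (0, 0) into a one-point list, and pass a sequence of pairs through unchanged. This only addresses the TypeError in __init__; how move() should behave is a separate question:

class Shape(object):
    def __init__(self, coords):
        super(Shape, self).__init__()
        try:
            iter(coords[0])        # already a sequence of points?
        except TypeError:
            coords = [coords]      # no: wrap the single flat point
        self._coords = list(map(list, coords))

    def __getitem__(self, key):
        return self._coords[key]

p = Shape((0, 0))
print p[0]   # [0, 0]
sq = Shape([(0, 0), (0, 1), (1, 1), (1, 0)])
print sq[2]  # [1, 1]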
I applied logistic regression on my training set after splitting the data set into test and train sets, but I got the error below. I tried to work it out, and when I print my response vector y_train in the console it shows integer values like 0 and 1. But when I write it to a file the values are floats like 0.0 and 1.0. If that's the problem, how can I overcome it?
lenreg = LogisticRegression()
print y_train[0:10]
y_train.to_csv(path='ytard.csv')

lenreg.fit(X_train, y_train)
y_pred = lenreg.predict(X_test)
print metrics.accuracy_score(y_test, y_pred)
The stack trace is as follows:
Traceback (most recent call last):
  File "/home/amey/prog/pd.py", line 82, in <module>
    lenreg.fit(X_train, y_train)
  File "/usr/lib/python2.7/dist-packages/sklearn/linear_model/logistic.py", line 1154, in fit
    self.max_iter, self.tol, self.random_state)
  File "/usr/lib/python2.7/dist-packages/sklearn/svm/base.py", line 885, in _fit_liblinear
    " class: %r" % classes_[0])
ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0.0
Meanwhile I've come across a similar question that went unanswered. Is there a solution?
The problem here is that your y_train vector, for whatever reason, only has zeros. It is actually not your fault, and it's kind of a bug (I think). The classifier needs 2 classes or else it throws this error.
It makes sense: if your y_train vector only has zeros (i.e. only one class), then the classifier doesn't really need to do any work, since all predictions should just be that one class.
In my opinion the classifier should still complete, just predict the one class (all zeros in this case), and throw a warning, but it doesn't. It throws the error instead.
A way to check for this condition is like this:
import numpy as np

lenreg = LogisticRegression()
print y_train[0:10]
y_train.to_csv(path='ytard.csv')

if np.sum(y_train) in (0, len(y_train)):  # all zeros or all ones
    print "all one class"
    # do something else
else:
    # OK to proceed
    lenreg.fit(X_train, y_train)
    y_pred = lenreg.predict(X_test)
    print metrics.accuracy_score(y_test, y_pred)
To overcome the problem more easily, I would recommend just including more samples in your test set, like 100 or 1000 instead of 10.
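Alternatively, if you are creating the split yourself with scikit-learn, train_test_split accepts a stratify argument that preserves the class proportions on both sides, so neither split can collapse to a single class as long as the full data has both. A sketch, where X and y stand for your full feature matrix and labels (the import path is sklearn.model_selection in recent versions; older releases used sklearn.cross_validation):

from sklearn.model_selection import train_test_split

# Stratifying on y keeps the 0/1 ratio in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)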
I had the same problem using learning_curve:
train_sizes, train_scores, test_scores = learning_curve(
    estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes,
    scoring="f1", random_state=RANDOM_SEED, shuffle=True)
Add the shuffle parameter, which randomizes the sets.
This doesn't prevent the error from happening, but it increases the chances of having both classes in the subsets used by the function.
I found it was because only 1s or 0s wound up in my y_test, since my sample size was really small. Try changing your test_size value.
# python3
import numpy as np
from sklearn.svm import LinearSVC

def upgrade_to_work_with_single_class(SklearnPredictor):
    class UpgradedPredictor(SklearnPredictor):
        def __init__(self, *args, **kwargs):
            self._single_class_label = None
            super().__init__(*args, **kwargs)

        @staticmethod
        def _has_only_one_class(y):
            return len(np.unique(y)) == 1

        def _fitted_on_single_class(self):
            return self._single_class_label is not None

        def fit(self, X, y=None):
            if self._has_only_one_class(y):
                self._single_class_label = y[0]
            else:
                super().fit(X, y)
            return self

        def predict(self, X):
            if self._fitted_on_single_class():
                return np.full(X.shape[0], self._single_class_label)
            else:
                return super().predict(X)

    return UpgradedPredictor

LinearSVC = upgrade_to_work_with_single_class(LinearSVC)
or the hard way (more correct):
import numpy as np
from sklearn.svm import LinearSVC
from copy import deepcopy, copy
from functools import wraps

def copy_class(cls):
    copy_cls = type(f'{cls.__name__}', cls.__bases__, dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        try:
            hash(attr)
        except TypeError:
            # Assume lack of __hash__ implies mutability. This is NOT
            # a bulletproof assumption but good in many cases.
            setattr(copy_cls, name, deepcopy(attr))
    return copy_cls

def upgrade_to_work_with_single_class(SklearnPredictor):
    SklearnPredictor = copy_class(SklearnPredictor)
    original_init = deepcopy(SklearnPredictor.__init__)
    original_fit = deepcopy(SklearnPredictor.fit)
    original_predict = deepcopy(SklearnPredictor.predict)

    @staticmethod
    def _has_only_one_class(y):
        return len(np.unique(y)) == 1

    def _fitted_on_single_class(self):
        return self._single_class_label is not None

    @wraps(SklearnPredictor.__init__)
    def new_init(self, *args, **kwargs):
        self._single_class_label = None
        original_init(self, *args, **kwargs)

    @wraps(SklearnPredictor.fit)
    def new_fit(self, X, y=None):
        if self._has_only_one_class(y):
            self._single_class_label = y[0]
        else:
            original_fit(self, X, y)
        return self

    @wraps(SklearnPredictor.predict)
    def new_predict(self, X):
        if self._fitted_on_single_class():
            return np.full(X.shape[0], self._single_class_label)
        else:
            return original_predict(self, X)

    setattr(SklearnPredictor, '_has_only_one_class', _has_only_one_class)
    setattr(SklearnPredictor, '_fitted_on_single_class', _fitted_on_single_class)
    SklearnPredictor.__init__ = new_init
    SklearnPredictor.fit = new_fit
    SklearnPredictor.predict = new_predict
    return SklearnPredictor

LinearSVC = upgrade_to_work_with_single_class(LinearSVC)
You can find the indexes of the first (or any) occurrence of each class, concatenate those rows on top of the arrays, and delete them from their original positions; that way there will be at least one instance of each class in the training set. A sketch of this idea follows.
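A sketch with NumPy, where X and y stand for your full feature matrix and labels: np.unique with return_index=True yields the index of the first occurrence of each class, which can be moved to the front before an unshuffled split.

import numpy as np

classes, first_idx = np.unique(y, return_index=True)  # first row of each class

# Move those rows to the front and drop them from their old positions.
rest = np.delete(np.arange(len(y)), first_idx)
order = np.concatenate([first_idx, rest])
X, y = X[order], y[order]
# A split that keeps the leading rows in the training set now sees every class.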
This error is related to the dataset you are using: the dataset contains only one class, for example 1/Benign, whereas it must contain two classes, 1 and 0, or Benign and Attack.
I wrote this code in Python 2.7 to find the Fibonacci series, but there is an error in my code:
File "Fib.py", line 2, in <module>
class Fib:
File "Fib.py", line 21, in Fib
for n in Fib(4):
NameError: name 'Fib' is not defined
Can anyone resolve this bug?
class Fib:
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.a = 0
        self.b = 1
        return self

    def __next__(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        a, b = b, self.a + self.b
        return fib

for n in Fib(4):
    print n
Disclaimer: I cannot reproduce your error from the code you posted (see below for my guess work). However, I still get errors, so I'll fix them.
From your posted code:
I get:
Traceback (most recent call last):
  File "a.py", line 17, in <module>
    for n in Fib(4):
TypeError: instance has no next() method
It seems, if you're targeting Python 2.7, that you got mixed up with Python 3. The __next__ method was introduced in Python 3 (in PEP 3114, if you're interested); in Python 2, use next. Also, as self must be used to access instance member variables, a,b=b,self.a+self.b should be self.a, self.b = self.b, self.a + self.b. This makes your code:
class Fib:
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.a = 0
        self.b = 1
        return self

    def next(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        self.a, self.b = self.b, self.a + self.b
        return fib

for n in Fib(4):
    print n
Which produces the output:
0
1
1
2
3
Note that changing next to __next__ and changing print n to print(n) makes this work in Python 3 (but then not in Python 2; if you want both, you need to forward next to __next__ and use parentheses for print).
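A sketch of that both-versions variant; the next = __next__ alias is the only addition beyond the print change:

from __future__ import print_function

class Fib:
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.a = 0
        self.b = 1
        return self

    def __next__(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        self.a, self.b = self.b, self.a + self.b
        return fib

    next = __next__  # Python 2 looks for next(); forward it to __next__

for n in Fib(4):
    print(n)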
Guessed actual code:
Judging from your error, your original code probably looked like:
class Fib:
    def __init__(self, max):
        self.max = max

    def __iter__(self):
        self.a = 0
        self.b = 1
        return self

    def __next__(self):
        fib = self.a
        if fib > self.max:
            raise StopIteration
        a, b = b, self.a + self.b
        return fib

    for n in Fib(4):  # Note that the indentation makes the loop part of the class body
        print n
Indenting the for loop makes it part of the class body, and as the class name is not yet bound while the body executes, it raises a NameError. For a simpler example, try the following (it gives a similar error):
class A:
    print A
Therefore, the error you experience is most likely just an indentation mixup. Nice idea using an iterator, though.
An easier way to implement the Fibonacci series, using memoized recursion:

known = {0: 0, 1: 1}

def fibonacci(n):
    if n in known:
        return known[n]
    res = fibonacci(n - 1) + fibonacci(n - 2)
    known[n] = res
    return res
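For example, under Python 2:

print [fibonacci(i) for i in range(10)]
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]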
Fibonacci series with recursion:
def fib(term_num):
    if term_num == 0 or term_num == 1:
        return term_num
    return fib(term_num - 2) + fib(term_num - 1)

for x in range(1, 11):
    print(fib(x))

Output below:
1
1
2
3
5
8
13
21
34
55
How can I interpret booleans (or '') as integers 0 or 1, so that total could be 0, 1, or 2, depending on the values of uno and dos?
class foo(models.Model):
    uno = models.BooleanField()
    dos = models.BooleanField()
    total = models.PositiveSmallIntegerField(blank=True, default=0)

    def save(self, *args, **kwargs):
        # HUMDINGER....
        self.total = int(self.uno) + int(self.dos)
        super(foo, self).save(*args, **kwargs)  # Call the "real" save() method.
This is the error it is throwing for that line...
invalid literal for int() with base 10: ''
I'm surprised that your BooleanFields have the empty string as their value. Regardless, since booleans evaluate to 0 and 1 in a numeric context, you can just do:
self.total = bool(self.uno) + bool(self.dos)
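To see why this sidesteps the error: int('') raises exactly the ValueError you hit, while bool('') is simply False, and Python treats False and True as 0 and 1 in arithmetic. A quick interpreter check:

>>> bool('')
False
>>> bool('') + bool('x')
1
>>> int('')  # this is what save() was doing
ValueError: invalid literal for int() with base 10: ''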